From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:

* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:

* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's “Software Engineering Advice from Building Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on “Important AI Trends” @ Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean

* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google

* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps

00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic “Latency numbers every programmer should know”
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey
everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.

Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.

Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.

Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have, like, frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this latest advance.

Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to enable us to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size.

Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users.
And I think initially, when you worked on speech, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you'd need to double your CPU count. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?

Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because I think that's where you see what capabilities now exist that didn't exist in the sort of slightly less capable last year's version or six-months-ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader use cases. So I think what we want to do is always have kind of a highly capable, sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily, and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both. And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest size model.

Alessio Fanelli [00:03:24]: I mean, you and Geoffrey Hinton came up with distillation in 2014.

Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.

Alessio Fanelli [00:03:30]: A long time ago.
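That 2014 distillation idea, training a small model against the large model's softened output distribution rather than against hard labels, can be sketched in a few lines. This is a toy illustration of the concept only, not anything from Gemini's training stack; the class names, logits, and temperature below are made up:

```python
import math

def softmax(logits, temperature=1.0):
    # Higher temperature softens the distribution, exposing more of the
    # teacher's knowledge about near-miss classes.
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy of the student against the teacher's softened
    # distribution, instead of against a one-hot hard label.
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# Made-up logits: the teacher ranks "cat" highest but also signals that
# "tiger" is plausible; a hard label would throw that second signal away.
teacher = [4.0, 2.5, 0.1]      # e.g. cat, tiger, car
aligned = [3.8, 2.4, 0.2]      # student that mimics the teacher's shape
misaligned = [0.2, 2.4, 3.8]   # student that inverts it

assert distillation_loss(teacher, aligned) < distillation_loss(teacher, misaligned)
```

A real setup would typically mix this soft loss with the ordinary hard-label loss and make many passes over the data, which is the "many passes over the logits" benefit Jeff describes later in the conversation.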
But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models, and, you know, how do you reevaluate them? How do you think about, in the next generation of models, what is worth revisiting? You worked on so many ideas that end up being influential, but in the moment, they might not feel that way necessarily.

Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that: train all these independent sort of expert models and then squish them into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. Except today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.

Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes...
It might be lossy in other areas, and it's kind of like an uneven technique, but you can probably distill it back. I think the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think that whole capability merging without loss, I feel like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.

Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation as being that you can have a much smaller model and a very large, you know, training data set, and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed: you can get, you know, very close to your largest model's performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people, because it has enabled us, for multiple Gemini generations now, to make the Flash version of the next generation as good as or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.

Shawn Wang [00:07:02]: So, Dara asked: the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?

Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model, and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have, and also inference time scaling.
It can also be a useful thing to improve the capabilities of the model.

Shawn Wang [00:07:35]: Yeah, cool. And obviously, I think the economics of Flash are what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.

Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.

Shawn Wang [00:07:50]: No, I mean, economics wise, because Flash is so economical, you can use it for everything. It's in Gmail now. It's in YouTube. It's in everything.

Jeff Dean [00:08:02]: We're using it more in our search products, in AI Overviews and AI Mode.

Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Yeah, I didn't even think about that.

Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it also has lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do something until it actually finishes what you asked it to do. Because you're going to ask now not just "write me a for loop," but "write me a whole software package to do X or Y or Z." And so having low latency systems that can do that seems really important. And Flash is one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs. The interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long context kinds of attention operations, and, you know, having sparse models with lots of experts.
These kinds of things really matter a lot in terms of how you make these models servable at scale.

Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for, like, the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about it like, in certain tasks, the Pro model today saturates some sort of task. So next generation, that same task will be saturated at the Flash price point. And I think for most of the things that people use models for, at some point the Flash model in two generations will be able to do basically everything. And how do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.

Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is, as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like, I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true not just of coding, but of, you know, now, can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a much more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier ahead of what people ask the models to do. And that also then gives us insight into, okay, where do things break down?
How can we improve the model in these particular areas in order to make the next generation even better?

Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or, like, test sets you use internally? Because it's almost like the same benchmarks get reported every time, and it's like, all right, it's 99 instead of 97. How do you keep pushing the team internally, like, this is what we're building towards?

Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I like to think the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for whatever it is the benchmark is trying to assess and get it up to like 80, 90%, whatever. I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, because it's either the case that you've now achieved that capability, or there's also the issue of leakage of public data, or very related kinds of data being in your training data. So we have a bunch of held-out internal benchmarks that we really look at, where we know that wasn't represented in the training data at all. They represent capabilities that we want the model to have that it doesn't have now, and then we can work on, you know, assessing how we make the model better at these kinds of things. Is it that we need a different kind of data to train on that's more specialized for this particular kind of task?
Do we need, you know, a bunch of architectural improvements or some sort of model capability improvements? You know, what would help make that better?

Shawn Wang [00:12:53]: Is there such an example, a benchmark that inspired an architectural improvement? I'm just kind of jumping on that.

Jeff Dean [00:13:02]: I mean, I think some of the long context capability of the Gemini models, which came, I guess, first in 1.5, really was about looking at, okay, we want to have, um, you know,

Shawn Wang [00:13:15]: Immediately everyone jumped to, like, completely green charts. I was like, how did everyone crack this at the same time? Right. Yeah.

Jeff Dean [00:13:23]: I mean, as you say, that single needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or something. Most models don't actually have, you know, much larger than 128K or 200K context these days. We're trying to push the frontier to 1 million or 2 million tokens of context, which is good, because I think there are a lot of use cases where putting a thousand pages of text or, you know, multiple hour-long videos in the context, and then actually being able to make use of that, is useful. The areas to explore there are fairly large. But the single needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic "take all this content and produce this kind of answer" benchmarks from a long context, which better assess what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?

Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning.
It's interesting because I think the more meta level I'm trying to operate at here is: you have a benchmark, and you're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's exactly the kind of thing Jason Wei, who used to work at Google, would say: you're going to win short term; longer term, I don't know if that's going to scale. You might have to undo that.

Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but on what capability you would want. And I think we're very convinced that, you know, long context is useful, but it's way too short today, right? Like, I think what you would really want is: can I attend to the internet while I answer my question? But that's not going to happen by purely scaling the existing solutions, which are quadratic. A million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can form, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state, with your permission. So, like, your emails, your photos, your docs, the plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way?
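The scaling argument Jeff is making can be put in back-of-envelope terms: standard attention compares every token with every other token, so cost grows with the square of context length. A quick arithmetic sketch, no model involved:

```python
def attention_pairs(n_tokens):
    # Standard self-attention scores every (query, key) pair,
    # so the work grows quadratically with context length.
    return n_tokens ** 2

million = attention_pairs(1_000_000)           # roughly today's frontier
trillion = attention_pairs(1_000_000_000_000)  # "attend to the internet"

# Going from 1M to 1T tokens is a 10^12x blowup in attention work,
# which is why the answer has to be an illusion built from retrieval
# rather than a literal trillion-token attention window.
assert trillion // million == 10**12
```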
Yeah.

Shawn Wang [00:16:26]: By the way, I think I did some math, and it's like, if you spoke all day, every day, for eight hours a day, you only generate a maximum of like a hundred K tokens, which very comfortably fits.

Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...

Shawn Wang [00:16:46]: Well, also, I think the classic example is you start going beyond language into, like, proteins and whatever else is extremely information dense.

Jeff Dean [00:16:55]: Yeah. I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video and audio, sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Like LIDAR sensor data from, say, Waymo vehicles, or robots, or, you know, various kinds of health modalities: x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in terms of trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Because it sort of hints to the model that this is a thing.

Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic, and I just get to ask you all the questions I always wanted to ask, which is fantastic: are there some king modalities, like modalities that supersede all the other modalities? So a simple example is vision can, on a pixel level, encode text.
And DeepSeek had this DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a vision-capable thing. So maybe vision is just the king modality.

Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion, well, like video as opposed to static images, because, I mean, there's a reason evolution has evolved eyes like 23 independent ways: it's such a useful capability for sensing the world around you. Which is really what we want these models to do: be able to interpret the things we're seeing or the things we're paying attention to, and then help us use that information to do things.

Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out: I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.

Jeff Dean [00:19:15]: Yeah. I mean, I think people are not necessarily aware of what the Gemini models can actually do. Like, I have an example I've used in one of my talks. It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has, like, Michael Jordan hitting some jump shot at the end of the finals, and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description. And so you now get an 18-row table of that information extracted from the video, which is, you know, not something most people think of: turning a video into a SQL-like table.

Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of, like, you mentioned attending to the whole internet, right?
Google is almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five, six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? Like, how do you internally figure out, you know, how do we build the AI mode that has maybe a much broader search span versus the more human one?

Jeff Dean [00:20:47]: I mean, I think even in pre-language-model-based work, you know, our ranking systems would be built to start with a giant number of web pages in our index, many of which are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated signals of various kinds, in order to get down to ultimately what you show, which is, you know, the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents, with maybe 30 million interesting tokens. And then how do you go from that to the 117 documents I really should be paying attention to in order to carry out the task that the user has asked for? And I think, you know, you can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models.
Then you have some system that sort of helps you narrow down from 30,000 to the 117, with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding a very small subset of things that are relevant.

Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in Google search history that, you know, BERT was basically immediately put inside of Google search, and that improved results a lot, right? I don't have any numbers off the top of my head, but those are obviously the most important numbers to Google.

Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page, and really get at the notion that the topic of this page or this paragraph is highly relevant to this query.

Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over these very high traffic systems. Like, it's Google, it's YouTube. YouTube has this semantic ID thing where every item in the vocab is a YouTube video or something, and it predicts the video using a codebook, which is absurd to me at YouTube's scale. And then most recently Grok also, for xAI, which is like, yeah.
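The multi-stage funnel Jeff outlined above, trillions of tokens cut to roughly 30,000 candidates by lightweight scoring, then to around 117 documents by a stronger model, then a final pass by the most capable model, can be sketched as a three-stage pipeline. Everything here is hypothetical scaffolding: the scoring function and the numbers are stand-ins, not Google's actual signals:

```python
def cheap_score(doc, query):
    # Stage-1 stand-in: crude word overlap. A real system might use
    # posting lists or a tiny embedding model here.
    return len(set(doc.split()) & set(query.split()))

def funnel(corpus, query, k1=30_000, k2=117, rerank=None, answer=None):
    # Stage 1: lightweight, highly parallel scoring over everything.
    stage1 = sorted(corpus, key=lambda d: cheap_score(d, query), reverse=True)[:k1]
    # Stage 2: a somewhat more expensive model narrows to what's worth reading.
    stage2 = sorted(stage1, key=rerank, reverse=True)[:k2] if rerank else stage1[:k2]
    # Stage 3: only these k2 documents ever reach the most capable model.
    return answer(stage2, query) if answer else stage2

corpus = ["solar panel deployment report", "cat videos", "solar energy stats"]
top = funnel(corpus, "solar deployment", k1=2, k2=1)
assert top == ["solar panel deployment report"]
```

The point of the shape: the expensive model's context holds k2 documents regardless of corpus size, which is what makes the "illusion of attending to trillions of tokens" affordable.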
I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.

Shawn Wang [00:24:06]: So do you have, like, a history of, like, what's the progression? Oh yeah.

Jeff Dean [00:24:09]: I mean, I actually gave a talk at, I guess, the Web Search and Data Mining conference in 2009. We never actually published any papers about the origins of Google search, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or 2005, and that talk is really about that evolution. And one of the things that really happened in 2001 was we were working to scale the system in multiple dimensions. One is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're going to not do well. And then we also needed to scale our capacity, because our traffic was growing quite extensively. And so we had, you know, a sharded system where you have more and more shards as the index grows. You have, like, 30 shards, and then if you want to double the index size, you make 60 shards, so that you can bound the latency with which you respond to any particular user query. And then as traffic grows, you add more and more replicas of each of those. And we eventually did the math and realized that in a data center where we had, say, 60 shards and, you know, 20 copies of each shard, we now had 1,200 machines with disks. And we're like, hey, one copy of that index would actually fit in memory across those 1,200 machines. So in 2001, we put our entire index in memory, and what that enabled from a quality perspective was amazing.
Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three- or four-word query, because now you can add synonyms like restaurant and restaurants and cafe and bistro and all these things. And you can suddenly start really getting at the meaning of the word, as opposed to the exact form the user typed in. And that was 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are principles that you use to design these systems, especially when, I mean, in 2001 the internet is doubling, tripling every year in size? And I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles that you use to think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, first, whenever you're designing a system, you want to understand what design parameters are going to be most important. So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple? Will that system work well?
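The query softening Jeff describes, expanding a short literal query into dozens of softened terms once disk seeks no longer constrain you, can be sketched like this; the synonym table is a toy stand-in for Google's actual expansion machinery:

```python
# Pre-LLM query "softening": expand the user's literal terms with synonyms
# so retrieval matches meaning, not exact words. The table below is a toy
# stand-in; the real system derived expansions from data at scale.

SYNONYMS = {
    "restaurant": ["restaurants", "cafe", "bistro", "eatery"],
    "cheap": ["inexpensive", "affordable", "budget"],
}

def soften(query: str) -> list[str]:
    terms = []
    for word in query.lower().split():
        terms.append(word)                    # keep the original term
        terms.extend(SYNONYMS.get(word, []))  # add softened variants
    return terms
```

With the index on disk, every extra term cost a seek per shard; in memory, throwing 50 terms at the index is cheap, which is the whole point of the 2001 change.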
And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by factors of five or ten, but probably not beyond that, because often what happens is that if you design a system for X, and something suddenly becomes a hundred X, that enables a very different point in the design space, one that would not make sense at X but all of a sudden at a hundred X makes total sense. So, like, going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the state on disk that those machines can actually hold a full copy of the index in memory. And that all of a sudden enabled a completely different design that wouldn't have been practical before. So I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index quite extensively. We were growing the update rate of the index. The update rate actually is the parameter that changed the most, surprisingly. It used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden, news-related queries... you know, if you've got last month's news index, it's not actually that useful.Shawn Wang [00:29:11]: News is a special beast. Was there any... like, you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to also be updated.Shawn Wang [00:29:23]: So, yeah, it's interesting.
And then you have to classify... you have to decide which pages should be updated and at what frequency. Oh yeah.Jeff Dean [00:29:30]: There's a whole system behind the scenes that's trying to decide update rates and the importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.Shawn Wang [00:29:50]: Yeah, yeah. This mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up: latency numbers every programmer should know. Was there a general story behind that? Did you just write it down?Jeff Dean [00:30:06]: I mean, this has eight or ten different kinds of metrics, like: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send a packet from the US to the Netherlands or something?Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?Jeff Dean [00:30:25]: We had a data center in the Netherlands. So, I mean, I think this gets to the point of being able to do the back-of-the-envelope calculations. These are the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing of the result page or something, how would I do that? I could pre-compute the image thumbnails. I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth do I need? How many disk seeks would I do? And you can actually do thought experiments in 30 seconds or a minute with those basic numbers at your fingertips.
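The back-of-the-envelope style Jeff describes can be sketched with a few of the classic numbers; the values below are the widely circulated approximate figures (roughly 2012-era hardware), and the thumbnailing function is an illustrative worked example, not a real design:

```python
# Approximate "numbers every programmer should know" (nanoseconds).
# These are the widely quoted ballpark figures; real hardware varies.
NS = {
    "l1_cache_ref": 0.5,
    "branch_mispredict": 5,
    "main_memory_ref": 100,
    "ssd_random_read": 150_000,        # ~150 microseconds
    "disk_seek": 10_000_000,           # ~10 milliseconds
    "round_trip_us_to_nl": 150_000_000,  # ~150 ms transatlantic RTT
}

def thumbnail_on_the_fly(images: int, disk_seeks_per_image: int = 1) -> float:
    # Rough cost (ns) of fetching full images from disk at request time
    # to thumbnail them: dominated entirely by seeks.
    return images * disk_seeks_per_image * NS["disk_seek"]

# 20 result images fetched from disk is 20 * 10 ms = 200 ms of seeks alone,
# which is the 30-second argument for precomputing thumbnails instead.
```

The point isn't the exact constants; it's that with them memorized, design alternatives can be eliminated in your head before any code is written.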
And then as you build software using higher-level libraries, you kind of want to develop the same intuitions for how long it takes to, you know, look up something in this particular kind of data structure.Shawn Wang [00:31:51]: I wonder if you have any... if you were to update your list...Jeff Dean [00:31:58]: I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is: how much state will you need to bring in from memory, either on-chip SRAM, or HBM from the accelerator-attached memory, or DRAM, or over the network? And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think sub one picojoule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how you make the most energy-efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you'd better make use of that thing you moved many, many times. That's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah.
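The energy arithmetic behind batching can be written out directly, using Jeff's illustrative figures of roughly 1 pJ per multiply and roughly 1000 pJ to move a parameter into the multiplier unit:

```python
# Energy-based view of batching. The constants are the illustrative
# orders of magnitude from the conversation, not measured values.
MOVE_PJ = 1000.0   # ~pJ to move one parameter across the chip into the MXU
MULT_PJ = 1.0      # ~pJ for one multiply at low precision

def energy_per_useful_multiply(batch_size: int) -> float:
    # A moved parameter is reused once per batch element, so the movement
    # cost is amortized over `batch_size` multiplies.
    return (MOVE_PJ + batch_size * MULT_PJ) / batch_size

# batch 1:   ~1001 pJ per multiply, dominated by data motion
# batch 256: ~4.9 pJ per multiply, motion cost mostly amortized away
```

This is exactly the trade-off in the exchange that follows: batch size one gives the best latency, but pays the full thousand-picojoule move for every single multiply.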
Right.Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one picojoule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick like the one you did with putting everything in memory? Obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if that's something you already saw with the TPUs, right? To serve at your scale, you probably saw that coming. What hardware innovations or insights were formed because of what you were seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. I think for serving some kinds of models, you pay a lot higher cost in time and latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you get quite good throughput improvements and latency improvements from doing that. So you're now striping your smallish-scale model over, say, 16 or 64 chips. And if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like, how much do you decide where the improvements have to go?
So, like, this is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto an ASIC, and that's kind of the most extreme version. How much of it is worth doing in hardware when things change so quickly? What was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we have a lot of interaction between, say, the TPU chip design and architecture team and the higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. Because as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to have a reasonable lifetime as a chip, which takes you three, four, or five years out. So you're trying to predict what ML computations people will want to run two to six years out, in a very fast-changing field. And so having people with interesting ML research ideas about things we think will start to work in that timeframe, or will be more important in that timeframe, really enables us to get interesting hardware features put into, say, TPU N+2, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Because, I mean, sometimes you can squeeze some changes into N+1, but bigger changes are going to require that the chip design be earlier in its design process. So whenever we can do that, it's generally good.
And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something ten times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change, and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of that: we already committed to this chip design, so we cannot take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that it's efficient on the chips that you're going to have for both training and inference of that generation of model. So I think it goes both ways. You know, sometimes you can take advantage of, say, lower-precision things that are coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying ternary is like...Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount. Right. Because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. You know, I think people have gotten a lot of mileage out of having very low bit precision, but then having scaling factors that apply to a whole bunch of those weights.Shawn Wang [00:39:15]: Interesting. So, low precision, but scaled-up weights. Yeah. Huh. Yeah.
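The low-bits-plus-scale-factor scheme Jeff mentions is block-wise quantization: very few bits per weight, with one shared scale per block of weights. A minimal sketch, where the 4-bit width and per-block granularity are illustrative choices rather than any specific TPU format:

```python
# Block-wise low-precision quantization sketch: each block of weights is
# stored as small integers plus one shared floating-point scale factor.

def quantize_block(weights, bits=4):
    # Largest representable magnitude for signed integers of this width.
    qmax = 2 ** (bits - 1) - 1            # 7 for signed 4-bit
    scale = max(abs(w) for w in weights) / qmax
    if scale == 0:
        scale = 1.0                        # all-zero block: any scale works
    # Round each weight to the nearest representable integer, clipped.
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize_block(q, scale):
    # Reconstruction: integer code times the block's shared scale.
    return [v * scale for v in q]
```

The per-block scale is why this works: outliers in one block don't destroy the precision of every other block, and the transferred bits per weight stay tiny.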
Never considered that. Yeah. Interesting. While we're on this topic: the concept of precision at all is weird when we're sampling, you know? At the end of this, we're going to have all these chips that do very good math, and then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards energy-based models and processors. I'm just curious, obviously you've thought about it, but what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think there are a bunch of interesting trends there. Energy-based models is one. You know, diffusion-based models, which don't sequentially decode tokens, are another. Speculative decoding is a way that you can get sort of an equivalent, very small...Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: ...batch factor. For example, you predict eight tokens out, and that enables you to increase the effective batch size of what you're doing by a factor of eight, and then you maybe accept five or six of those tokens. So you get a five X improvement in the amortization of moving weights into the multipliers to do the prediction for the tokens. So these are all really good techniques, and I think it's really good to look at them from the lens of energy, real energy, not energy-based models, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to solutions that are going to be better in terms of being able to serve larger models, or equivalent-size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah.
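Speculative decoding through the same energy lens: one verification pass of the big model moves the weights once, and every accepted draft token shares that cost. A small sketch using the draft-8, accept-5 figures from the conversation (the 1000 pJ constant is the illustrative movement cost from earlier, not a measured value):

```python
# Speculative decoding as amortization: a draft model proposes several
# tokens ahead, the big model verifies them in one pass, and the weight
# movement for that pass is shared across every accepted token.

MOVE_PJ = 1000.0  # illustrative pJ to move a parameter (from earlier discussion)

def amortized_move_cost(move_pj: float, accepted_tokens: int) -> float:
    # One big-model pass, `accepted_tokens` tokens produced: the movement
    # cost per token drops by the acceptance count.
    assert accepted_tokens >= 1
    return move_pj / accepted_tokens

# Draft 8, accept 5: each token effectively costs 1000 / 5 = 200 pJ of
# weight movement, the ~5x amortization improvement described above.
```

This is the same lever as batching, except the "batch" comes from tokens of a single sequence rather than from multiple users, so latency improves instead of degrading.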
Well, I think it's appealing intellectually. I haven't seen it really hit the mainstream, but I do think there's some poetry in the sense that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, there's also the more exotic stuff, like analog-based computing substrates as opposed to digital ones. I think those are super interesting because they can potentially be low power. But I think you often end up wanting to interface them with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing at the boundaries and periphery of the system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Any other interesting research ideas that you've seen, or maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers, so you have enough.Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. I mean, in terms of research directions, there's a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools, in order to build things that can collectively accomplish much more significant pieces of work than you would ask a single model to do? So that's super interesting.
How do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because it would broaden out the capabilities of the models. If we could apply the improvements that you're seeing in both math and coding to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that effectively, that would really make the models improve quite a lot, I think.Alessio Fanelli [00:43:26]: I'm curious. When we had Noam Brown on the podcast, he said they already proved you can do it with deep research. And you kind of have it with AI Mode, in a way; it's not verifiable. I'm curious if there's any thread you think is interesting there. Both are like information retrieval, in a sense. So I wonder if the retrieval is the verifiable part that you can score, or what? How would you model that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even the retrieving. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which ones are the 50 most relevant, or something? I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic as opposed to an actual retrieval system. Yeah.Shawn Wang [00:44:28]: I do think there is that weird cliff where it feels like we've done the easy stuff, and now the next part is super hard and nobody's figured it out. But it always feels like that every year. And exactly with this RLVR thing, everyone's talking about, well, okay, how do we...
...the next stage of the non-verifiable stuff. And everyone's like, I don't know, you know, LLM judge.Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there are lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Because I think everyone sees that the models are great at some things, and they fall down around the edges of those things, and are not as capable as we'd like in those areas. And coming up with good techniques, trying those, and seeing which ones actually make a difference is what the whole research aspect of this field is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM8K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics the models can do now, and now you're doing IMO and Erdős problems in pure language. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, a year and a half or something. And I think for other areas, it'd be great if we could make that kind of leap. And, you know, we don't exactly see how to do it for some areas, but we do see it for some other areas, and we're going to work hard on making that better. Yeah.Shawn Wang [00:46:13]: Yeah.Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.Shawn Wang [00:46:20]: That would be, as far as content creators go.Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out. Just to dwell a bit on the IMO gold.
I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. Yeah. What's your reflection? I think this question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said, nope, we'll do it all in the LLM.Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me, because, you know, humans manipulate symbols, but we probably don't have a symbolic representation in our heads, right? We have some distributed representation, neural-net-like in some way, of lots of different neurons, and activation patterns firing when we see certain things. And that enables us to reason and plan, and, you know, do chains of thought and roll them back: now that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think is happening inside real brains in neural-net-based models. So it never made sense to me to have completely separate discrete symbolic things, and then a completely different way of thinking about those things.Shawn Wang [00:47:59]: Interesting. Yeah. I mean, maybe it seems obvious to you, but it wasn't obvious to me a year ago. Yeah.Jeff Dean [00:48:06]: I mean, I do think that IMO path, translating to Lean and using Lean, and also a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget, is actually quite good, because it shows you that the capabilities of that general model have improved dramatically, and now you don't need the specialized model.
This is actually very similar to the 2013-to-2016 era of machine learning, right? It used to be that people would train separate models for each different problem. I want to recognize street signs, so I train a street sign recognition model. Or I want to do speech recognition, so I have a speech model. I think now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do, and they're getting better and better.Shawn Wang [00:49:10]: And you don't need domain experts. So I interviewed ETA, who was on that team, and he was like, yeah, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. Yeah. And it's kind of interesting that people with this universal skill set of machine learning, you just give them data and give them enough compute, and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here, which is this concept of, maybe, the capacity of a model: abstractly, a model can only contain the number of bits that it has. And, you know, God knows, Gemini Pro is like one to ten trillion parameters; we don't know. But the Gemma models, for example: a lot of people want the open-source local models, and those have some knowledge which is not necessary, right? They can't know everything, like you have the...
The luxury of the big model is that it should be capable of everything. But when you're distilling and going down to the small models, you're actually memorizing things that are not useful. Yeah. And so how do we, I guess, extract that? Can we divorce knowledge from reasoning, you know?Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space. You might prefer something that is more generally useful in more settings than this obscure fact that it has. So I think that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world, right? Like, it's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are. And it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some other more obscure part of the world is, but it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval... Yeah.Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, yeah, right?Jeff Dean [00:52:01]: Like, we're not going to train Gemini on my email.
Probably we'd rather have a single model that can use retrieving from my email as a tool, have the model reason about it, retrieve from my photos or whatever, and then make use of that with multiple stages of interaction. That makes sense.Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? Like, when people say, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of short-term stopgaps, or?Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain. For healthcare, say, or for, say, robotics: we're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities. So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability but improve its robotics capabilities. And we're always making these kinds of trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, you know, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail computer languages or coding capabilities that may suffer, or multimodal reasoning capabilities may suffer,
because we didn't get to expose it to as much data there, but it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models. It'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare module, all of which can be knitted together to work in concert and called upon in different circumstances. Right? Like, if I have a health-related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.Shawn Wang [00:54:36]: Installable knowledge. Yeah.Jeff Dean [00:54:37]: Right.Shawn Wang [00:54:38]: Just download it as a package.Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, a hundred billion tokens or a trillion tokens of health data. Yeah.Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think. Yeah.Alessio Fanelli [00:54:56]: Yeah. I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? It's like, if I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? And if I need a trillion healthcare tokens, they're probably not out there. You know, I think that's really the question.Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain. There's a lot of healthcare data that, you know, we don't have access to, appropriately. But there are a lot of healthcare organizations that want to train models on their own data, which is not public healthcare data.
So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be more bespoke, but probably might be better than a general model trained on, say, public data. Yeah.Shawn Wang [00:55:58]: Yeah. I believe... by the way, this is somewhat related to the language conversation. I think one of your favorite examples was that you can put a low-resource language in the context and it just learns. Yeah.Jeff Dean [00:56:09]: Oh, yeah. I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's no written text.Shawn Wang [00:56:20]: So, yeah. So you can just do it that way. Just put it in the context. Yeah. But can you put your whole data set in the context, right?Jeff Dean [00:56:27]: If you take a language like, you know, Somali or something, there is a fair bit of Somali text in the world, or Ethiopian Amharic or something. We're probably not putting all the data from those languages into the base Gemini training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.Shawn Wang [00:56:49]: Yeah.
This is a recap of the top 10 posts on Hacker News on January 18, 2026. This podcast was generated by wondercraft.ai.
(00:30) jQuery 4. Original post: https://news.ycombinator.com/item?id=46664755&utm_source=wondercraft_ai
(01:59) Gaussian Splatting – A$AP Rocky "Helicopter" music video. Original post: https://news.ycombinator.com/item?id=46670024&utm_source=wondercraft_ai
(03:28) Iconify: Library of Open Source Icons. Original post: https://news.ycombinator.com/item?id=46665411&utm_source=wondercraft_ai
(04:57) Predicting OpenAI's ad strategy. Original post: https://news.ycombinator.com/item?id=46668021&utm_source=wondercraft_ai
(06:26) Statement by Denmark, Finland, France, Germany, the Netherlands, Norway, Sweden, UK. Original post: https://news.ycombinator.com/item?id=46669025&utm_source=wondercraft_ai
(07:55) A Social Filesystem. Original post: https://news.ycombinator.com/item?id=46665839&utm_source=wondercraft_ai
(09:24) Command-line Tools can be 235x Faster than your Hadoop Cluster (2014). Original post: https://news.ycombinator.com/item?id=46666085&utm_source=wondercraft_ai
(10:54) Erdos 281 solved with ChatGPT 5.2 Pro. Original post: https://news.ycombinator.com/item?id=46664631&utm_source=wondercraft_ai
(12:23) The Nobel Prize and the Laureate Are Inseparable. Original post: https://news.ycombinator.com/item?id=46669404&utm_source=wondercraft_ai
(13:52) Police Invested Millions in Shadowy Phone-Tracking Software, Won't Say How It's Used. Original post: https://news.ycombinator.com/item?id=46672150&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio-quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
This is a recap of the top 10 posts on Hacker News on January 09, 2026. This podcast was generated by wondercraft.ai.
(00:30) Anthropic blocks third-party use of Claude Code subscriptions. Original post: https://news.ycombinator.com/item?id=46549823&utm_source=wondercraft_ai
(01:50) The Vietnam government has banned rooted phones from using any banking app. Original post: https://news.ycombinator.com/item?id=46555963&utm_source=wondercraft_ai
(03:10) Cloudflare CEO on the Italy fines. Original post: https://news.ycombinator.com/item?id=46555760&utm_source=wondercraft_ai
(04:30) Show HN: I made a memory game to teach you to play piano by ear. Original post: https://news.ycombinator.com/item?id=46556210&utm_source=wondercraft_ai
(05:50) European Commission issues call for evidence on open source. Original post: https://news.ycombinator.com/item?id=46550912&utm_source=wondercraft_ai
(07:10) Mathematics for Computer Science (2018) [pdf]. Original post: https://news.ycombinator.com/item?id=46550895&utm_source=wondercraft_ai
(08:31) Kagi releases alpha version of Orion for Linux. Original post: https://news.ycombinator.com/item?id=46553343&utm_source=wondercraft_ai
(09:51) Flock Hardcoded the Password for America's Surveillance Infrastructure 53 Times. Original post: https://news.ycombinator.com/item?id=46555807&utm_source=wondercraft_ai
(11:11) "Erdos problem #728 was solved more or less autonomously by AI". Original post: https://news.ycombinator.com/item?id=46560445&utm_source=wondercraft_ai
(12:31) What happened to WebAssembly. Original post: https://news.ycombinator.com/item?id=46551044&utm_source=wondercraft_ai
Most people think of mathematicians as loners, working in isolation. And, it's true, many of them do. But Paul Erdos never followed the usual path. At four years old, he could ask when you were born and then calculate in his head the number of seconds you had lived. But he didn't learn to butter his own bread until he was twenty. Instead, he traveled the world, meeting one mathematician after another, collaborating on an astonishing number of publications. With simple, lyrical text and richly layered illustrations, this is a beautiful introduction to the world of mathematics and a fascinating look at the unique character traits that made "Uncle Paul" a great man. Written by Deborah Heiligman, illustrated by LeUyen Pham, and not yet published in Brazil, which is why I translated and adapted it especially for this episode. To follow the story along with the book's illustrations, buy the book here: https://amzn.to/3FT99IT If you enjoyed it, share it with your friends and follow me on social media! https://www.instagram.com/bookswelove_livrosqueamamos/ And stay tuned, because every Friday I publish a new story. See you soon!
Prime Video just debuted a new series that's right on point for us horror fans, ESPECIALLY if you're a fan of the series Reaper! Stick with T as he works his way through the new Kevin Bacon thriller, The Bondsman! | Marphos | Hub takes his son Cade to school to warn him about Lucky Callahan, a local troublemaker. However, the episode's events spiral into Hub and Cade tracking down a water-manipulating demon (Marphos) who has possessed Deirdre Donovan, a social media star. Hub uses Cade as a decoy to escape the local sheriff, leading to Cade's arrest and a family argument about Lucky and an upcoming trip to Nashville. | Erdos | Deputy Sheriff Briggs is possessed by the demon Erdos and extorts a food truck owner. Hub, while reminiscing about his past music career, learns of the possession and tracks down Briggs/Erdos. Meanwhile, Kitty plants evidence against Lucky, almost getting caught by Maryanne, who eventually reconnects with Hub. Hub confronts the demon; he and Maryanne fight Briggs/Erdos, who ritualistically murders a hardware store assistant before immolating. Hub and Maryanne kill Erdos, and Hub reveals his demonic job and Lucky's attempt on his life. Kitty's planted evidence leads to Lucky's arrest, and Hub sends Maryanne and Cade to Nashville to protect them from demons.
Lords:
* Mitch (https://www.youtube.com/@HBMmaster)
* Nathan (https://store.steampowered.com/app/2976260/ChainStaff/)
Topics:
* I wish someone told me all classic Star Trek was based on writer Ursula Le Guin
* Ryu Numbers and Tom Scott Numbers (https://www.tumblr.com/tomscottnumber)
* How to get out of a chair
* Have you seen the new show? by Orcboxer (https://www.tumblr.com/orcboxer/745859389762764801/poob-has-it-for-you)
* OUR DRAWINGS - PRINCESS MOVIE | Full Animation Film | Artist (https://www.youtube.com/watch?v=G1LhdhQEKtg, https://www.tumblr.com/fullanimationfilmartist)
Microtopics:
* Mitch from jan Misali.
* The episode where we have beef.
* The milk frother attachment to the chain staff.
* The milk frother is DLC; you pay extra for that milk frother.
* Hearing about Ursula Le Guin for the first time.
* The Wind's Twelve Quarters.
* The Clock Before Armageddon.
* Solving sexism, here on Topic Lords.
* Living long enough to discover a new favorite sci-fi author.
* What's interesting about Sapphire for the PC Engine, other than the price?
* Dreams of gutting the collectors' market.
* Spending $2,000 on a rare game and trying to convince yourself it's good enough to justify the price.
* A game that is not very rare but is still expensive because it's so good that people want to keep it.
* Deciding at the last minute to not use motion controls for your Wii rhythm game.
* Erdos numbers preceding Bacon numbers because of course it was mathematicians who came up with that shit.
* What counts as a Bacon Connection.
* A character who has been in a lot of crossover games.
* The de facto authority on Ryu numbers.
* Whether the frog at the end of TXT World is the same as the game from Frog Fractions.
* Whether Hatricia from the Hat DLC is the same as the cat in the wedding dress in the photo at the end of TXT World.
* The British guy who wears a red shirt.
* The ever-shifting discourse for what counts as a connection for Scott Numbers.
* Adding rules to make a trivial game into a non-trivial game.
* White men who have had a long YouTube career.
* Whether Ryu has a surname.
* Stephen Hawking's Sabbath number.
* Movies and plays tending to tell different stories, whereas recorded and live music tends to be the same music.
* The hypothetical guy whose favorite movie is just one where they pointed a camera at a stage play.
* Why do people love squats?
* Doing a 500-pound deadlift to get out of bed.
* Doing one exercise to get better at a slightly related exercise.
* Fucking up your knees by getting out of a chair repeatedly.
* Doing the old heave-ho thing to get out of a chair.
* The Inherently Beautiful Design of Everyday Objects, by Bonald Normag.
* Watching the X-Files with your wife.
* Aged Like Me.
* Putting your knees under the chair and standing up, and unbending your knees pushes the chair backwards and it falls over, but you're upright, and then, like the punching bags with sand at the bottom, the chair bounces back up and hits you in the ass so you don't even need to work to start walking.
* A wheelchair with an extremely gentle ejection seat.
* Why obese people have worse COVID outcomes.
* How to make a bed that fat people want to lie in face-down.
* A.C. Slatering.
* Why isn't Jim an industrial designer?
* Applying for an industrial design job and putting sharks on your resume.
* Poob has it for you.
* Screenshotting a Tumblr post and cropping out the username to post it on TikTok and claiming that it's something your therapist told you.
* The ghost you're talking about waving its hands in your face, being like "I'm right here!"
* Tumblr eras.
* The event that convinced the Tumblr community that Tumblr users should not ever be in charge of anything.
* The people who left Tumblr when they banned porn and then came back when Elon Musk bought Twitter.
* The Tumblr Funnymen.
* The Tumblr CEO personally harassing trans women off of Tumblr.
* Someone who looks like they've been deactivated.
* The miracle of Tumblr still being online.
* The Poster's Curse.
* Bucket, where are you?
* A jumble of keywords that someone might hypothetically search for.
* Distinctly amateurish outsider art, in a way that only a human could create.
* Beatboxing puppy!
* A contextless segue into a musical number.
* An hour-long trailer for a twenty-minute movie.
* A movie made by people who were figuring out 3D animation as they were making it.
* Legally distinct Marios rapping.
* Being anti-AI art because you are extremely pro copyright law.
* The beatboxing puppy scene that everybody forgot about.
* It's cool when people make art.
* Four consecutive narrators all explaining the same concepts in slightly different ways that slightly contradict each other.
* A movie asking you to watch it over and over to pump up its numbers.
* Wanting to see a sequel to "OUR DRAWINGS - PRINCESS MOVIE | Full Animation Film | Artist" because you want to know what the title will be.
* Complaining that Amazing Digital Circus is more important than your own movie.
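The Ryu, Bacon, Erdos, and Scott numbers discussed in this episode are all the same mathematical object: shortest-path distance in a collaboration graph. A minimal breadth-first-search sketch, using a toy graph with made-up single-letter names rather than any real co-authorship or co-appearance data:

```python
from collections import deque

def collaboration_number(graph, root, person):
    """Shortest co-authorship/co-appearance distance from `person` to `root`
    (e.g. root="Erdos" gives Erdos numbers), via breadth-first search.
    Returns None if the two are not connected."""
    if person == root:
        return 0
    seen = {root}
    queue = deque([(root, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor == person:
                return dist + 1
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None

# Toy graph: an edge means a shared paper (Erdos numbers) or a shared
# film (Bacon numbers). Names are placeholders, not a real data set.
graph = {
    "Erdos": ["A", "B"],
    "A": ["Erdos", "C"],
    "B": ["Erdos"],
    "C": ["A"],
}
print(collaboration_number(graph, "Erdos", "C"))  # 2
```

The "what counts as a connection" debates in the episode are exactly debates about which edges belong in `graph`; the algorithm itself never changes.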
On this edition, Billy is joined by James Erdos.
We welcome you with joy As companions on the journey of faith. We commit ourselves to fellowship and worship, service and witness, as partners in God's mission. We receive you as Christ has received us.
Does the idea of automating your business fill you with optimism while also making your palms sweat with the fear of overwhelm about where to start? Erin breaks down the first steps in how even the most overwhelmed advisor, or even someone just starting out, can start the process of automating their travel business. Want to work with Erin? You can find her at: www.erinfaith.com instagram.com/erinfaitherdos instagram.com/erinfaithsolutions Join our Facebook Group: https://www.facebook.com/groups/travelagentobjections
Film.UA's Victoria Yarmoshchuk, Paprika Studios' Akos Erdos, United Media's Tatjana Pavlović, Global Agency's Izzet Pinto and Drugi Plan's Nebojša Taraba at C21's inaugural Content Budapest event in the heart of the Hungarian capital.
Alex Kontorovich is a Professor of Mathematics at Rutgers University and served as the Distinguished Professor for the Public Dissemination of Mathematics at the National Museum of Mathematics in 2020–2021. Alex has received numerous awards for his illustrious mathematical career, including the Levi L. Conant Prize in 2013 for mathematical exposition, a Simons Foundation Fellowship, an NSF CAREER award, and being elected a Fellow of the American Mathematical Society in 2017. He currently serves on the Scientific Advisory Board of Quanta Magazine and as Editor-in-Chief of the Journal of Experimental Mathematics. In this episode, Alex takes us from the ancient beginnings to the present day on the subject of circle packings. We start with the Problem of Apollonius on finding tangent circles using straight-edge and compass, and continue forward in basic Euclidean geometry up until the time of Leibniz, whereupon we encounter the first complete notion of a circle packing. From here, the plot thickens with observations on surprising number-theoretic coincidences, which only received full appreciation through the craftsmanship of chemistry Nobel laureate Frederick Soddy. We continue on with more advanced mathematics arising from the confluence of geometry, group theory, and number theory, including fractals and their dimension, hyperbolic dynamics, Coxeter groups, and the local-to-global principle of advanced number theory. We conclude with a brief discussion on extensions to sphere packings.
Patreon: http://www.patreon.com/timothynguyen
I. Introduction
00:00: Biography
11:08: Lean and Formal Theorem Proving
13:05: Competitiveness and academia
15:02: Erdos and The Book
19:36: I am richer than Elon Musk
21:43: Overview
II. Setup
24:23: Triangles and tangent circles
27:10: The Problem of Apollonius
28:27: Circle inversion (Viète's solution)
36:06: Hartshorne's Euclidean geometry book: Minimal straight-edge & compass constructions
III. Circle Packings
41:49: Iterating tangent circles: Apollonian circle packing
43:22: History: Notebooks of Leibniz
45:05: Orientations (inside and outside of packing)
45:47: Asymptotics of circle packings
48:50: Fractals
50:54: Metacomment: Mathematical intuition
51:42: Naive dimension (of Cantor set and Sierpinski Triangle)
1:00:59: Rigorous definition of Hausdorff measure & dimension
IV. Simple Geometry and Number Theory
1:04:51: Descartes's Theorem
1:05:58: Definition: bend = 1/radius
1:11:31: Computing the two bends in the Apollonian problem
1:15:00: Why integral bends?
1:15:40: Frederick Soddy: Nobel laureate in chemistry
1:17:12: Soddy's observation: integral packings
V. Group Theory, Hyperbolic Dynamics, and Advanced Number Theory
1:22:02: Generating circle packings through repeated inversions (through dual circles)
1:29:09: Coxeter groups: Example
1:30:45: Coxeter groups: Definition
1:37:20: Poincare: Dynamics on hyperbolic space
1:39:18: Video demo: flows in hyperbolic space and circle packings
1:42:30: Integral representation of the Coxeter group
1:46:22: Indefinite quadratic forms and integer points of orthogonal groups
1:50:55: Admissible residue classes of bends
1:56:11: Why these residues? Answer: Strong approximation + Hasse principle
2:04:02: Major conjecture
2:06:02: The conjecture restores the "Local to Global" principle (for thin groups instead of orthogonal groups)
2:09:19: Confession: What a rich subject
2:10:00: Conjecture is asymptotically true
2:12:02: M. C. Escher
VI. Dimension Three: Sphere Packings
2:13:03: Setup + what Soddy built
2:15:57: Local to Global theorem holds
VII. Conclusion
2:18:20: Wrap up
2:19:02: Russian school vs Bourbaki
Image Credits: http://timothynguyen.org/image-credits/
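The Descartes's Theorem segment (1:04:51) has a compact computational form: for four mutually tangent circles with bends (curvatures, bend = 1/radius) k1, k2, k3, k4, the theorem states (k1+k2+k3+k4)^2 = 2(k1^2+k2^2+k3^2+k4^2), so given three bends the two possible fourth bends are k4 = k1+k2+k3 ± 2·sqrt(k1·k2 + k2·k3 + k3·k1). A quick sketch (the function name is ours, not from the episode):

```python
import math

def fourth_bends(k1, k2, k3):
    """Descartes Circle Theorem: the two possible bends of a circle tangent
    to three mutually tangent circles with bends k1, k2, k3. A negative
    bend means the fourth circle encloses the other three."""
    s = k1 + k2 + k3
    r = 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return s + r, s - r

# The classic integral Apollonian packing (-1, 2, 2, 3): starting from
# bends 2, 2, 3, the two solutions are 15 and -1 (the outer circle).
print(fourth_bends(2, 2, 3))  # (15.0, -1.0)
```

That both solutions come out as integers here is exactly Soddy's observation about integral packings: once four bends are integral, every bend generated by repeated inversion stays integral.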
The D&D 5e Actual Play campaign, Just Us League, is back! The party are trapped between a rock and a hard place as they attempt to hide from Erdos after escaping his clutches. Will the gang be able to outrun the various creatures who are after them? What is that mysterious cave in the middle of the forest? Can Flyte really harness the powers of the Gods to guide the party? Find out all this and more as we...ROLL THE DAMN DICE!!! Get your tickets for our Live Show, 'A Night of Magic, Misfits and Making Stuff Up', on Thursday 23rd February at the Playhouse Theatre, Cheltenham here: https://cheltplayhouse.org.uk/CheltPlayhouse.dll/WhatsOn?f=1258200 About Our Channel: Roll The Damn Dice is a D&D 5e Actual Play show and podcast. They are a Gloucestershire-based group of friends made up of performers, stand-up comics, and a plumber. Having started as a project to keep connected and entertained during the 2020 lockdowns, Roll The Damn Dice has ended up being featured at MCM London and Birmingham Comic Cons, and Cheltenham Literature Festival.
------------
Cast:
Paul Avery as the Dungeon Master
Luke Robins as Flyte, the Air Genasi Bladesinger Wizard
Stephen Santouris as Rothgon Von Ryder, the Tiefling Pact of the Fiend Warlock/Aberrant Mind Sorcerer
Toni Shaw as Orianna Von Ryder, the Tiefling Arcane Trickster Rogue
Joy-Amy Wigman as Carouser Mawn, the Dragonborn Oath of Devotion Paladin
Crew:
Oliver Chapman as Camera Operator
Matt White as Sound Technician
Antonia Shaw as Editor
Battle music (0:35:24-0:49:30) by Dom Jones: https://on.soundcloud.com/9HVrm
------------
Download the awesome Roll The Damn Dice theme tune, Freakshow, here: https://joy-amy.bandcamp.com/releases
Stay connected with us by following us on our other platforms: https://www.linktr.ee/rollthedamndice
Donate to our Ko-Fi page here, to allow us to continue making and improving our content: https://www.ko-fi.com/rollthedamndice
------------
Check out Holly Hammer, the extremely talented artist behind the wonderful character art this season: https://www.linktr.ee/hollyhammerart
------------
This season is sponsored by Critit UK, a Norwich-based, family-run business who specialise in wonderful RPG gaming goods, all designed and made by themselves. You can get an exclusive 10% off your order with them if you use code 'RTDD10', or follow this link: https://www.critit.co.uk/discount/RTDD10
Check out their blog, where we post the last Friday of every month, here: https://critit.co.uk/blogs/news?contact%5Btags%5D=newsletter&form_type=customer#newsletter-blog-sidebar-0
Join us as we debut our homebrew sub-class, The Circle of the Broken, and discuss why all spies should be druids. For more information, please check out our website. Email: acoupleofcharacterspod at gmail dot com. Twitter, Instagram, Patreon: ACoCPodcast. Bookshop dot org storefront and gift cards. Use code CHOOSEINDIE on Libro.fm to receive a free audiobook when you purchase a subscription. Episode notes: Transcript. Erdos character sheet. Circle of the Broken. Circle of the Broken screen reader accessible. Dyslexia friendly versions: Transcript. Erdos character sheet. Circle of the Broken. Links to our homebrew Ancestry Options can be found on our homebrew page. A list of mentioned books can be found on the episode page. Mentioned episodes: Dexerus the Artificer. Pepsi the Wizard. Fantasy Name Generators. Dungeons & Dragons: Honor Among Thieves trailer. Knights of the Braille: Website. Podcast. Our guest appearance Part 1 & Part 2. Everyone-Games: See the full 2022 schedule. All event VODs can be found on YouTube. Learn more about Everyone-Games. You can still donate to Everyone Can UK and Stack Up US. Dungeons & Dragon Types: Website. Twitter. Cover art: Copyright Chandra Reyer 2019.
Introduction: Tunde Erdos holds a PhD in Business and Organisational Management, a Master's in Executive Coaching, a Master's in Translation & Simultaneous Interpreting, and a Bachelor's degree in Law. She is the author of 3 books, a prolific speaker at conferences, and has published articles in peer-reviewed scientific journals and professional coaching magazines. Tunde's latest endeavour is a documentary on the Light and Shadow of Coaching, which she produced to raise funds for a Social Impact Initiative in Kenya. Podcast Episode Summary: This episode explores the many facets of Coaching, our relationship to it, and the often unexamined shadows that exist for coaches and the coaching profession. The phenomenon of Coaching Presence, and our collective understanding of what Presence is and could be for coaching, is discussed. The words curiosity, relationality, power, presence and energy surface several times across this provocative conversation. Points made over the episode: When asked to share a different story of herself than the one I introduced, Tunde is quick to share that she is joy, playful and full of expansion, more than the knowledge perspective I shared with the listeners. There are so many facets to a person, so many selves, that we approximate a diamond. Coaching does too. We are interactional human beings, resonating, being stimulated and responding differently to whomever is present, and differently again depending on the contexts we live in. Tunde was quick to notice her own shadow operating in the moment, where she was walking away from the direct question posed. Tunde recalls a dark, shameful moment in Coaching where her client was more present than she was, and it prompted her to explore Presence, Movement Synchronicity, and non-verbal communication in coaching through her PhD. Some of the results from her research were surprising: coaches with more education and more advanced training are more reactive and defensive of their practice.
Tunde's process research, which looked at the energy between coach and client, the coach's self-regulatory capacity after a coaching session, and the many interviews with coaches and feedback sessions given on various observations from video recordings, showcased this surprising phenomenon. Another research finding, and a shadow of coaches, Tunde calls the Snow White Phenomenon, reframing the famous expression the queen uses in the movie from "Mirror, mirror, on the wall, who is the fairest of them all?" to "Mirror, mirror, on the wall, who is the most present of them all?" The light of coaching is well documented and researched. We know Coaching is a powerful tool for growth, development, learning, change and transformation. We know and understand this. We are in love with Coaching, so in love that this too is a shadow. We have to be willing to be curious about our attachment to coaching in this direction. Some coaches like to think they know the "Ideal Client", but Tunde's research found that often the energy between coach and client in an "ideal" scenario was asynchronous. Our sense of Presence also diminishes over time: coaches put a lot of effort into being present at the beginning of a session, but they confuse the relationality of presence. Curiously, the effort we expend in this way to show up creates a lot of energy but also dissonance. The ICF Ignite program aims to anchor coaching beyond 1:1 Coaching and beyond Team Coaching, to be seen as a social impact tool. The main purpose of Tunde's documentary is to raise funds for a Social Impact Initiative she is developing to support women in Kenya, through coaching, to become entrepreneurial. The documentary also serves another purpose: to shine a light on the shadow side of coaching by way of several hundred interviews exploring the contributions made by coaches and leaders in the field.
Interestingly, one contributor shared that he thought coaches were too serious, and then he himself refused to have a vignette of him practicing joy and presence featured on the show. A shadow: what we espouse, we do not live. We are not very trusting of ourselves in this field. Another shadow: we are also very disconnected from our humanity, from ourselves, and while we are starting to use this wisdom, we are very preoccupied with ourselves as coaches, trying to understand it from a cognitive space. We underestimate, or do not understand, the power we wield in organisations and the negative consequences of our work. We do not fully appreciate the dynamic nature of organisations, the living systems we enter, despite using several slogans in our literature. We have to question how responsible we are as coaches in the way we use our power in systems. Some examples of this power include team members leaving a team when they discover they don't fit, or a team dissolving after coaching. Other examples include coaches asking clients to "take a deep breath" or similar when the same understanding around presence and mindfulness is not shared. There has been huge growth in the use of internal coaches in organisations and a corresponding growth in the building of coaching cultures. Often these cultures do not protect internal coaches from the very systemic issues they are dealing with in coaching, parallel process for example. Supervision by an external supervisor is required. Tunde shared many wishes she has for coaching and coaches: to have conversations and be curious about our shadow side; to watch our preoccupation with the future when the present is not well understood and our understanding of concepts like presence is still burgeoning. Words create worlds: are we too attracted to the future instead of the present, and what drives this preoccupation?
We pay attention to language in coaching and the words a client uses but we also need to pay much greater attention to the ways we are with each other. Tunde left the conversation grateful for the opportunity to share the social impact initiative she is about to launch for women in Kenya for my interest in it and also for the relationship we developed over the conversation. Resources shared www.coachingdocu.com www.mama.or.ke www.tundeerdos.com eBook on Coaching Presence
This is the last episode concluding our season 3 of the podcast. We interview President Chad Hanak about his entrepreneurial journey for Superior QC and get Ryan's industry perspective. Check out Superior QC: https://superiorqc.com/
Ken and David interview Drilling Fluid Engineer Michael Ashcraft to hear his perspective.
Tamas' travel record is extensive and impressive, but is there a dark side to getting a taste of everything? Tamas opened up about the dark side of travel, which he asserts can be an addiction like any other. As we explored the topic, tangents included the Myers-Briggs Type Indicator, learning new skills, sharpening metaphorical axes, psychedelics and, of course, possible solutions and takeaways. Follow Tamas on Twitter: Twitter.com/_Tamas
Ken and David run through RSS systems and where they stand in the marketplace.
Ken and David go back in time by reviewing the MWD 2021 Survey we conducted back in 2018. They discuss the questions that predicted what the industry would be like in the near future.
What do one of the most important mathematicians of the past millennium and Kevin Bacon have in common? Find out in this episode of Mc2, hosted by Matteo Curti and Francesco Lancia. See omnystudio.com/listener for privacy information.
Ken and David break down Telemetry Compression and Encoding.
In this episode, Ken and David discuss projects and ideas that should require more R&D. Visit us at: www.erdosmiller.com
Dr. Fred is joined on the show today by friend and fellow psychiatrist, Dr. Brandon Erdos. Professionally, Dr. Erdos helps clients build concrete skills to move beyond traumatic experiences and difficult emotions and restore feelings of normalcy in daily life. His approach is based in psychodynamic, psychoanalytic, and Cognitive Behavioral Therapy. Today's episode, however, has little to do with psychiatry, and instead explores common areas of interest between two friends, including personal development, cultivating the art of listening, the captive power of rhythm, and more. On today's show, you'll hear Dr. Fred and Dr. Erdos discuss:
Why Dr. Erdos refers to himself as a "smart rat"
The distinction between predicting and creating the future
How improvisation within a structure applies to both music and life
Enjoying the journey
Why listening can be "dangerous"
Learning to listen to one's listening
The intersection of listening and drumming
Why being willing to be "bad" at something is "good"
Being willing to play
The primal nature of drumming
Episode Length: 00:49:12
BRANDON ERDOS'S RESOURCES
Zencare Profile > https://zencare.co/provider/psychiatrist/brandon-erdos
Contact > brandon.erdos at gmail dot com
ALSO MENTIONED ON TODAY'S SHOW
Landmark Education > https://www.landmarkworldwide.com
Kenny Aronoff > https://kennyaronoff.com
Road To Hana (Maui, Hawaii) > https://roadtohana.com/
Ringo Starr > https://en.wikipedia.org/wiki/Ringo_Starr
Mitch Mitchell > https://en.wikipedia.org/wiki/Mitch_Mitchell
Neil Peart > https://en.wikipedia.org/wiki/Neil_Peart
In Memory Of Elizabeth Reed (Allman Brothers Band) > https://www.youtube.com/watch?v=8SZlz9WKccE
WELCOME TO HUMANITY RESOURCES
Podcast Website > http://www.welcometohumanity.net/podcast
PURCHASE DR. FRED'S BOOK (paperback or Kindle) > Creative 8: Healing Through Creativity & Self-Expression by Dr. Fred Moss > http://www.amazon.com/Creative-Healing-Through-Creativity-Self-Expression/dp/B088N7YVMG
FEEDBACK > http://www.welcometohumanity.net/contact
This week, I'm speaking to Eric Clough who is the founder of Manhattan-based design firm 212box. Eric was originally from the Midwest and spent his formative years in both Brussels and London. He graduated from Yale in 1999 and founded 212box in late 2000. 212box is actually made up of six companies, which are working in different disciplines and different sectors, and they have an incredible array of different projects. They worked in retail with people like Christian Louboutin, Sergio Rossi, Lego, and Erdos / 1436, and enlisting clients such as the von Furstenberg family and Philip Roth for residential. In this conversation, Eric goes into a lot of the philosophy and creativity, and how they merged that with the business aspects of 212box. 212box are brilliant narrators and storytellers with one of their projects involving hiding a series of clues and hidden narratives within one of the buildings. Still here?
Eric has a little riddle for you: If anyone listening knows of a scented tree, which grows in groves of Jacaranda and has the sure-footedness of a springbok, the industriousness of a beaver, and the vision of an eagle, we hope that you will reach out to that tree and try to tell them that weʼd love to meet them. -If you can solve this riddle, get in touch! ► For your 28-day free trial of SweetProcess, go to https://www.SweetProcess.com/boa ► Access your free training at http://SmartPracticeMethod.com/ ► If you want to speak directly to our advisors, book a call at https://www.businessofarchitecture.com/call ► Subscribe to my YouTube Channel for updates: https://www.youtube.com/c/BusinessofArchitecture ******* For more free tools and resources for running a profitable, impactful, and fulfilling practice, connect with me on: Facebook: https://www.facebook.com/groups/businessofarchitecture Instagram: https://www.instagram.com/enoch.sears/ Website: https://www.businessofarchitecture.com/ Twitter: https://twitter.com/BusinessofArch Podcast: http://www.businessofarchitecture.com/podcast/ iTunes: https://itunes.apple.com/us/podcast/business-architecture-podcast/id588987926 Android Podcast Feed: http://feeds.feedburner.com/BusinessofArchitecture-podcast ******* Access the FREE Architecture Firm Profit Map video here: http://freearchitectgift.com Download the FREE Architecture Firm Marketing Process Flowchart video here: http://freearchitectgift.com Come to my next live, in-person event: https://www.businessofarchitecture.com/live Carpe Diem!
In which we are introduced to the most exclusive cinema-musico-academic club on earth, and Ken volunteers to provide a guest rap. Certificate #32655.
In this episode, we interview the president, CEO, and co-founder of Cold Bore Technology. We hear about his journey in the Oil & Gas Industry.
For this episode I sat down to talk with Ismael Acosta Servetto. Ismael is an undergraduate student in Biological Sciences and in Astronomy at the Facultad de Ciencias, Universidad de la República. He is a member of the Origin of Life and Early-career Network (OoLEN) and part of the Young Scientist Program at the Blue Marble Space Institute of Science (BMSIS), both international scientific networks focused on astrobiology, prebiotic chemistry, and the origin of life. We talk about how Precambrian fossils are studied and why the analyses are worth all the trouble, about life on Mars, and about alien movies. Links: Erdos number https://es.wikipedia.org/wiki/Número_de_Erdős Rosalind Franklin rover https://inta.es/ExoMarsRaman/es/mision-exomars/rover-rosalind-franklin/ Young Scientist Program at the Blue Marble Space Institute of Science [in English] https://www.bmsis.org/ysp/ Thank you very much for listening to UPDC. If you want to find more episodes, subscribe, or leave a comment, you can do so on ivoox.com, on Apple Podcast, or wherever else podcasts are found. We are also on Instagram and Twitter (@podcastciencia on both).
On this episode we have Ken Miller (CEO/Founder - Erdos Miller) join us, where we discuss how he went from a high school internship at Texas Instruments to betting on himself, making the jump to pursue something he was passionate about, and hopping into the oilfield. Through the advice of one of his earliest mentors: "There is no such thing as job security," he made the leap to start his own company. Ken has a wonderful view on building teams and instilling a corporate culture whose core values bleed into every decision that is made. Really enjoyed hearing how Ken's entrepreneurial attitude, outlook, and passion drive him to create an evolving, successful culture, as well as how he dives into all projects head first. Thank you Ken, truly enjoyed your time and we wish you and your team continued success!
In this episode, we interview R&D & Engineering Supervisor Hunter Simmons from Gordon Technologies, LLC. We hear about his journey in the Oil & Gas Industry.
In this episode, we interview MWD Manager Dave Harry from Drillers Directional. He gives us an inside look at the industry and his experience with MWD from the Canadian side.
We're back with our newest Season 3 podcast. In this episode, we interview MWD Manager Brad Barevich from Total Directional Services. He gives us an inside look at the industry and his experience with MWD.
We're back with our newest Season 3 podcast. On this episode, we interview Geologist Eli DenBesten. He gives us an inside look at the industry as a Geologist and his experience with MWD.
We're back with our newest Season 3 podcast. On this episode, we interview Field Specialist Clayton Carter. Clayton shares his industry insight and the common issues he experiences in the field.
We're back with our newest Season 3 podcast. On this episode, we interview Clinton Moss CEO of Gunnar Energy. Clinton shares his experience as an entrepreneur and the breakdown of Magnetic Ranging.
We're back with our newest Season 3 podcast. This is the second episode that we've shot digitally. On this episode, we interview John Leitch CEO of Packers Plus. John shares his experience as an entrepreneur and the fundamentals of Completions.
We're back with our newest Season 3 podcast. This is the first episode that we've shot digitally. On this episode, we interview Stuart Mclaughlin CEO of Magnolia R&D. Stuart shares his experience as an entrepreneur and the fundamentals of coiled tubing. Visit: https://magnoliarnd.com/ Website: https://www.erdosmiller.com/
Pati is someone I've known since high school, who has studied and worked in marketing and communications for over 15 years, including over 7 years at Apple. At one point she found an opportunity to fuel her passions and help others, coaching them holistically, helping them transform from the inside out, connect intuitively, and manifest consciously. Takeaway lessons in this episode include: 1. Put kindness first. 2. Value and follow a growth mindset. 3. The subconscious mind is the main guide for everything. Train it. 4. The purest level of growth mindset is achieved with compassion. 5. Fall in love with your customer. 6. Treat yourself like an experiment. Try everything. To connect with Pati: Website - www.livetotransform.com IG - @patierdoscoach Facebook - https://www.facebook.com/livetotransform/ I'd love to hear from you! Also, I'd love it if you connected with me on LinkedIn or Instagram. If you want your website redone, updated, and managed with unlimited updates for just $250/month (CRAZY GOOD DEAL, RIGHT??), go to Manage My Website and hook up with one of the smartest, most talented guys I've ever met - THE Nathan Ruff. Support the show (https://connectwithpablo.com)
Director of an entertainment company and Project Management Consultant, Erin Erdos is the self-proclaimed "catchall drawer". Born with cancer, Erin also lost her father to the same illness and has spent her lifetime battling that illness and significantly diminished eyesight. While trying to focus, quite literally, on her day-to-day life, she has been able to find the light in every darkness. Connect with Erin @erinfaitherdos Connect with me @emilyhayden You can also connect with me on: YouTube Twitter My favorite supplements www.1stphorm.com/EmilyHaydenFitness My favorite ready-made healthy meals www.iconmeals.com (code EHFIT) *"Bay Breeze" music courtesy of the artist, FortyThr33
Anthony Lee is the Store Network Image Director (Store & Pop-Up store design and construction & Shop window display design) at Erdos Cashmere Group. He is based between Beijing & Paris. Erdos is the world’s largest manufacturer and retailer of cashmere apparel, with five brands: 1436, Erdos, 1980, Blue ERDOS and ERDOS Kids. Erdos markets its apparel through over 500 direct stores and over 1000 franchise stores. Anthony speaks about the sudden planning he had to do to adjust to our new situation.
In this episode I talk about a piece of land that nobody owns, six degrees of separation, and Hungarian scientist Paul Erdos.
Gabor Erdos is an Umpire for the British game, he shares stories and gives insight into the game from behind the mask! He also answers all (and there were lots!) of listeners' questions. If you'd like to message Gabor or find out more about his free umpire webinars, you can reach him on @aIRdEE on Twitter, or email umpiring@britishbaseball.org to find out more about the webinars. Thank you to everyone who has contributed ideas and questions to the show, your support is greatly appreciated. If you want to come on and chat, message me and we'll talk! Enjoy the show! If you want to befriend the show, you can find me on the social medias @britbaseballpod on Twitter, FB and IG, come and say hi and send in show suggestions and feedback! Or if you want to keep it between me and you, the email is britishbaseballpodcast@gmail.com Thanks as always for your love, please make sure you don't miss an episode, subscribe on your favourite podcast app from below and please subscribe to the new YouTube channel for additional content and videos: YouTube - www.youtube.com/channel/UC3NoKq6eBPKW6u5BfACi0Zg Anchor (why not send me a voice message!) - https://anchor.fm/britishbaseballpodcast Breaker - https://www.breaker.audio/british-baseball-podcast Google podcast - www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy81MzI3MmVjL3BvZGNhc3QvcnNz Apple podcast (please leave a nice review) - https://podcasts.apple.com/us/podcast/british-baseball-podcast/id1493864870?uo=4 Overcast - https://overcast.fm/itunes1493864870/british-baseball-podcast Pocket casts - https://pca.st/kdfc2cm3 RadioPublic - https://radiopublic.com/british-baseball-podcast-GZBKqz Spotify - https://open.spotify.com/show/76L3sT1OaZmgir8rr0OiWn Castbox (please leave a nice review) - https://castbox.fm/channel/British-Baseball-Podcast-id2555190?country=gb
Phillip from our sales team picks David’s brain on common questions about MWD submitted from the public. Some of the questions compare MWD topics for today’s standards to the past standards and procedures used. Got any questions? Send them to podcast@erdosmiller.com ***Thank you to our Sponsor Gibson Reports!*** https://www.gibsonreports.com/productandservices Be sure to check them out for your Industry Report. Follow us on Facebook, LinkedIn, and Twitter. @erdosmiller https://www.erdosmiller.com/
Jay Cummings is an Assistant Professor of Mathematics at Sacramento State University. He received his Ph.D. in 2016 from the University of California, San Diego under the supervision of Ron Graham. Jay's research is in many different flavors of combinatorics: enumerative, extremal, probabilistic, spectral and algebraic. He has an Erdos number of 2 and is the author of "Real Analysis: A Long-Form Mathematics Textbook". His website is www.longformmath.com where you can find his blog and information about his textbook and future books to come. Jay's personal website can be found here: http://webpages.csus.edu/Jay.Cummings/ We would like to thank Jay for being on our show "Meet a Mathematician" and for sharing his stories and perspective with us! www.sensemakesmath.com PODCAST: http://sensemakesmath.buzzsprout.com/ TWITTER: @SenseMakesMath PATREON: https://www.patreon.com/sensemakesmath FACEBOOK: https://www.facebook.com/SenseMakesMath STORE: https://sensemakesmath.storenvy.com Support the show (https://www.patreon.com/sensemakesmath)
Mike and Wes debate the merits and aesthetics of Clojure in this week's rowdy language check-in. Plus why everyone's talking about the sensitivity conjecture, speedy TLS with rust, and more!
Kelly Erdos is a clinical pharmacist who helps geriatric patients manage their medicine as they encounter more complex health conditions. Erdos answers questions like "Why does it take so long to fill my prescription?" and "Why can't we automate the pharmacists at drugstores?" We also discuss what a pharmacist thinks about essential oils, and what a pharmacist thinks about CBD.
A podcast about technology and mathematics. Having fun while learning, with Mariano Pierantozzi.
Unknown to most people, but those who do know him swear he changed their lives. In this episode we explore the extraordinary life of one of the most prolific mathematicians of the 1900s: Paul Erdos, the man who loved only numbers. If you want to watch the video, visit my channel. If you want to buy the book, click here: https://amzn.to/2KVcQPp
Susie Qin is from Erdos, Inner Mongolia, sat for her CPA exam in San Francisco, and now lives in Beijing. On this episode we cover a lot of subjects: the hard-drinking culture of Erdos, her unique experience in college, working for an infamous Chinese boss, her volunteer work at Shepherd's Field Children's Village in Tianjin, and more. NOTE: This show contains a vivid story of domestic violence starting around 28 minutes in, so please be advised. It's a special show, and I hope you enjoy. BLOG: https://www.crazyinagoodway.com/home/susie-q
David and Ken discuss how we design electronics that can sit right next to your apple pie in the oven.
David and Ken go over the basics of figuring out just how radioactive the rocks underground are.
In this panel episode, we talk about Erdos numbers, gender differences in job selection, parity in publishing, the Word Police and self-censorship in outrage culture, the ethics of the N.F.L., Bird scooters, and Mary Ellen Pleasant and Juana Briones. The journal article we discuss: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0195298 Whisper at Night: http://www.whisperatnight.com
Protocols satisfying Local Differential Privacy (LDP) enable parties to collect aggregate information about a population while protecting each user's privacy, without relying on a trusted third party. LDP protocols (such as Google's RAPPOR) have been deployed in real-world scenarios. In these protocols, a user encodes his private information and perturbs the encoded value locally before sending it to an aggregator, who combines values that users contribute to infer statistics about the population. In this paper, we introduce a framework that generalizes several LDP protocols proposed in the literature. Our framework yields a simple and fast aggregation algorithm, whose accuracy can be precisely analyzed. Our in-depth analysis enables us to choose optimal parameters, resulting in two new protocols (i.e., Optimized Unary Encoding and Optimized Local Hashing) that provide better utility than protocols previously proposed. We present precise conditions for when each proposed protocol should be used, and perform experiments that demonstrate the advantage of our proposed protocols. About the speaker: Tianhao Wang is a Ph.D. candidate at Purdue University working with Professor Ninghui Li. His research focuses on the practical aspects of privacy and security. Tianhao's research has been published at top-tier security venues such as USENIX and CCS, and his Erdos number is 3. His current research projects include local differential privacy and electronic voting.
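The encode-perturb-aggregate pipeline the abstract describes can be illustrated with the simplest LDP mechanism, classic binary randomized response; this is a minimal sketch for intuition only, not the Optimized Unary Encoding or Optimized Local Hashing protocols the paper itself proposes, and the function names are illustrative:

```python
import math
import random

def perturb(bit: int, eps: float, rng: random.Random) -> int:
    """Randomized response: report the true bit with probability
    p = e^eps / (e^eps + 1), and the flipped bit otherwise."""
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if rng.random() < p else 1 - bit

def aggregate(reports, eps: float) -> float:
    """Debias the observed frequency of 1s to estimate the true frequency f.
    E[observed] = p*f + (1-p)*(1-f), so f = (observed - (1-p)) / (2p - 1)."""
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

# Simulate 10,000 users, 30% of whom hold the sensitive bit, at eps = ln(3).
rng = random.Random(0)
true_bits = [1 if rng.random() < 0.3 else 0 for _ in range(10_000)]
reports = [perturb(b, eps=math.log(3), rng=rng) for b in true_bits]
estimate = aggregate(reports, eps=math.log(3))  # close to 0.3, yet no single report is trustworthy
```

Each user's report is individually deniable, while the debiased average recovers the population statistic; RAPPOR and the paper's optimized protocols generalize this idea to multi-valued domains with encodings tuned for better utility at the same privacy level.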
In this episode we watch the movie Proof. Did Gwyneth Paltrow really prove the theorem? What theorem was she trying to prove? How many vaginas does a Time Lord have? All these questions and more are discussed in this week's Maths at the Movies. If you're interested in watching Proof you can follow the Amazon link below. Further reading links: more about the amazing life of Sophie Germain; who was Erdos?; calculate your Erdos number; what are prime numbers good for?; the ideas behind the Four Colour Theorem; buy the book of the play. Subscribe via iTunes.
In 1971 high school student Juliane Koepcke fell two miles into the Peruvian rain forest when her airliner broke up in a thunderstorm. Miraculously, she survived the fall, but her ordeal was just beginning. In this week's episode of the Futility Closet podcast we'll describe Juliane's arduous trek through the jungle in search of civilization and help. We'll also consider whether goats are unlucky and puzzle over the shape of doorknobs. Intro: Before writing about time machines, H.G. Wells calculated that he'd earned a single pound in his writing endeavors. In 1868, as an engineering trainee, Robert Louis Stevenson explored the foundation of a breakwater at Wick. Sources for our feature on Juliane Koepcke: Juliane Diller, When I Fell From the Sky, 2011. "She Lived and 91 Others Died," Life 72:3 (Jan. 28, 1972), 38. "Jungle Trek: Survivor of Crash Tells of Struggle," Los Angeles Times, Jan. 6, 1972, A11. "Didn't Want to Steal: Survivor of Crash Passed Up Canoe," Los Angeles Times, Jan. 9, 1972, A7. Jennings Parrott, "The Newsmakers: It's Back to School for Peru Survivor," Los Angeles Times, March 20, 1972, A2. Werner Herzog, Wings of Hope, 2000. Dan Koeppel, "Taking a Fall," Popular Mechanics, February 2010. Jason Daley, "I Will Survive," Outside 29:9 (Sept. 1, 2004), 64. Stephan Wilkinson, "Amazing But True Stories," Aviation History, May 2014. Tom Littlewood, "The Woman Who Fell to Earth," Vice, Sept. 2, 2010. "Juliane Koepcke: How I Survived a Plane Crash," BBC News, March 24, 2012. Frederik Pleitgen, "Survivor Still Haunted by 1971 Air Crash," CNN, July 2, 2009. Sally Williams, "Sole Survivor: The Woman Who Fell to Earth," Telegraph, March 22, 2012. Katherine MacDonald, "Survival Stories: The Girl Who Fell From the Sky," Reader's Digest (accessed July 2, 2017). Listener mail: "America's First Serial Killer - H.H. Holmes," geocaching.com (accessed July 7, 2017). Colin Ainsworth, "Mystery in Yeadon: Who Is Buried in Serial Killer's Grave?" Delaware County [Pa.] 
Daily Times, May 21, 2017. Robert McCoppin and Tony Briscoe, "Is 'Devil in White City' Buried in Tomb? Remains to Be Unearthed to Find Out," Chicago Tribune, May 4, 2017. ShaoLan Hsueh, "The Chinese Zodiac, Explained," TED2016, February 2016. Wikipedia, "Erdős–Bacon Number" (accessed July 7, 2017). Erdos, Bacon, Sabbath. Natalie Portman (Erdős-Bacon number 7) co-authored this paper under her birth name, Natalie Hershlag: Abigail A. Baird, Jerome Kagan, Thomas Gaudette, Kathryn A. Walz, Natalie Hershlag, and David A. Boas, "Frontal Lobe Activation During Object Permanence: Data From Near-Infrared Spectroscopy," NeuroImage 16:4 (August 2002), 1120–1126. Colin Firth (Erdős-Bacon number 7) was credited as a co-author of this paper after suggesting on a radio program that such a study could be done: Ryota Kanai, Tom Feilden, Colin Firth, and Geraint Rees, "Political Orientations Are Correlated With Brain Structure in Young Adults," Current Biology 21:8 (April 2011), 677–680. This week's lateral thinking puzzle was contributed by listener Alon Shaham, who sent this corroborating link (warning -- this spoils the puzzle). You can listen using the player above, download this episode directly, or subscribe on iTunes or Google Play Music or via the RSS feed at http://feedpress.me/futilitycloset. Please consider becoming a patron of Futility Closet -- on our Patreon page you can pledge any amount per episode, and we've set up some rewards to help thank you for your support. You can also make a one-time donation on the Support Us page of the Futility Closet website or browse our online store for Futility Closet merchandise. Many thanks to Doug Ross for the music in this episode. If you have any questions or comments you can reach us at podcast@futilitycloset.com. Thanks for listening!
2017-01-30 Special English. This is Special English. I'm Mark Griffiths in Beijing. Here is the news. Five hundred clean energy buses have been put to use in Tianjin, a major industrial city in north China. These public buses were jointly produced by Tianjin Bus Group and car maker BYD, which is based in Shenzhen in Guangdong Province. The electric buses can run at least 200 kilometers after a full charge, enough for a bus to finish its daily task. The Bus Group also opened a major charging station, capable of serving 80 buses at one time and a total of 450 buses in a day. This is the largest charging station in the area, which also includes Beijing and Hebei Province. Tianjin has 3,200 clean energy public buses. Among them, 1,300 are powered by electricity. China pins its hope on clean energy to reduce its dependence on coal and gas, which has been linked to the winter smog in northern China. Tianjin is among the cities with the poorest air quality. Since 2010, Tianjin has built 200 charging stations and 3,000 charging positions to encourage the use of clean energy transport. 
This is Special English. Australia's flag carrier Qantas Airways' fatality-free record in the jet age means it is the world's safest airline, for the fourth year running. AirlineRatings.com announced the flag carrier atop its Top 20 list recently, followed by Cathay Pacific, Middle-eastern giant Etihad Airways, Singapore Airlines and local rival Virgin Australia, which are listed alphabetically. The website's editor Geoffrey Thomas said that while those in the Top 20 are always leaders in safety, Qantas remains the leader in safety enhancements and operational excellence. Thomas said in a statement that over its 96-year history, Qantas has amassed an amazing record of firsts in safety and operations and is accepted as the world's most experienced airline. Qantas has been the lead airline in virtually every major operational safety advancement over the past 60 years. Qantas was the leader in the Future Air Navigation System and the Flight Data Recorder developed by Australia's chief scientific body to monitor the plane and crew performance. It also made advances in automatic landing and precision approaches in mountainous regions. The ratings website said Qantas was also the leader in real-time monitoring of its engines across its fleet using satellite communications, enabling problems to be detected before they become a major safety issue. You're listening to Special English. I'm Mark Griffiths in Beijing. Following 28 years of talks between China and Russia, construction has finally begun on a highway bridge connecting China and Russia across the Heilongjiang River. Stretching some 1,300 meters, this is the first highway bridge between the two countries. A Chinese official says the bridge is an important part of the economic corridor linking China, Mongolia and Russia. 
The bridge will boost trade between China and Russia, as well as China's investment in Russia. Economists expect that the bridge will benefit both Russia's Far East and China's initiative to revitalize the traditional industrial base of northeast China. With a total cost of 2.5 billion yuan, roughly 360 million U.S. dollars, the bridge is scheduled to open in 2019. This is Special English. China plans to further improve its space debris database and space debris monitoring facilities. That's according to a recent white paper entitled "China's Space Activities in 2016". The white paper said that in the next five years, China will improve the standardization system for space debris to further control near-earth objects and space climate. Efforts will be made to build a disaster early warning and prediction platform to raise the preventative capability. Research will be conducted on building facilities to monitor near-earth objects and to enable the country to monitor and catalog such objects. You're listening to Special English. I'm Mark Griffiths in Beijing. A study with critically ill, respirator-dependent patients showed that early in-bed cycling may help the patients recover more quickly during their stay in the hospital intensive care unit, or ICU. Canadian researchers say people may think that ICU patients are too sick for physical activity, but if patients start in-bed cycling two weeks into their ICU stay, they will be able to walk farther at hospital discharge. Lead researcher of the study Michelle Kho says their TryCYCLE study finds it safe and feasible to systematically start in-bed cycling within the first four days of mechanical ventilation and continue throughout a patient's ICU stay. For over a year, Kho and her team conducted a study of 33 ICU patients at St. Joseph's Healthcare Hamilton. 
The patients were 18 years of age or older, receiving mechanical ventilation, and walking independently prior to admission to the ICU. Kho said the study achievements even surprised the researchers, and the patients' abilities to cycle during critical illness exceeded their expectations. She adds that more research is needed to determine if this early cycling with critically ill patients improves their physical function. This is Special English. U.S. scientists have created a material that can independently heal the damage caused by mechanical wear, hence extending the service life of devices. The material is a transparent and soft rubber-like ionic conductor which can stretch to 50 times its original length. Researchers at the University of California found that the self-healing process of the material can finish within 24 hours at room temperature after being cut. The newly-designed material combined a polar, stretchable polymer with a mobile, high-ionic-strength salt. In addition to solving the instability problem, the material can also improve the decaying performance of the materials within the machinery. The researchers stressed the advantages of using the material in electrically activated transparent artificial muscles. Scientists have begun exploring the potential applications in other fields including robotics and medical research. You're listening to Special English. I'm Mark Griffiths in Beijing. You can access the program by logging on to newsplusradio.cn. You can also find us on our Apple Podcast. If you have any comments or suggestions, please let us know by e-mailing us at mansuyingyu@cri.com.cn. That's mansuyingyu@cri.com.cn. Now the news continues. A new study shows that 20 conditions make up more than half of all health care spending in the United States. The study examined spending on diseases and injuries. U.S. researchers tracked the costs associated with 155 conditions between 1996 and 2013. They found that a total of 30 trillion U.S. 
dollars was spent by Americans on personal health care over the 18-year period. Of these conditions, diabetes was the most expensive, totaling 101 billion dollars in diagnoses and treatments in 2013, while heart disease was the second most expensive, costing 88 billion dollars the same year. The study shows that costs associated with diabetes have grown 36 times more compared to those for heart disease. Heart disease was the number-one cause of death for the study period. The two conditions typically affect individuals who are 65 years of age and older. Back pain is the third-most expensive condition, primarily striking adults of working age. The three top spending categories, along with hypertension and injuries from falls, comprised 18 percent of all personal health spending, totaling 430 billion dollars in 2013. You're listening to Special English. I'm Mark Griffiths in Beijing. Chinese and German archaeologists have found images of what they believe to be Arabian horses in cliff paintings dating back 2,000 years in the Yinshan Mountains of north China's Inner Mongolia Autonomous Region. The images of Arabian horses have been found in a dozen cliff paintings, which also contain images of other animals and humans. The images are believed to be the oldest found to date. The horses are depicted in the paintings with armor, leather saddles and stirrups. The pictures were painted around 210 B.C., when the nomadic Huns were at war with a nomadic tribe from north China. More than 10,000 ancient cliff paintings have been found in the Yinshan Mountains. Experts say the pictures suggest that the Huns had trade links with people in western Asia and northern Africa at that time. Earlier archaeological excavations in Erdos in Inner Mongolia unearthed bronze and pottery figurines of Arabian horses. 
This is Special English. "The Ancient One" is going home. One of the oldest and most complete skeletons found in North America will be given back to Native American tribes in Washington State for reburial. President Barack Obama has signed a bill with a provision requiring the ancient bones known as Kennewick Man to be returned to tribes within 90 days. Experts estimate the remains found in 1996 on federal land near the Columbia River are at least 8,400 years old. The discovery triggered a lengthy legal fight between tribes and scientists over whether the bones should be buried immediately or studied. In 2015, new genetic evidence determined the remains were related to modern Native Americans. The bill transfers the skeleton, which the tribes call "the Ancient One", from the U.S. Army Corps of Engineers to the state archaeology department, which will give it to the tribes. The Yakama Nation is among the tribes that have pushed to rebury the bones in the manner their people have followed since ancient times. It took 20 years for the tribes to successfully fight for the return of the bones. You're listening to Special English. I'm Mark Griffiths in Beijing. Australia's iconic Great Ocean Road in Victoria is set for a major upgrade after it was damaged by a number of natural disasters last year. The Victorian Government has announced that 38 million U.S. 
dollars will be spent on urgent repairs and safety upgrades to the 240-kilometer-long national heritage-listed road. The road runs along Victoria's south-east coast, and was significantly damaged by bushfires at Wye River in December 2015 and January 2016, as well as a number of serious landslides in September caused by higher than average rainfall for the year. The upgrades for the popular tourist destination will include retaining walls, erosion prevention, rock fall netting, electronic traveler information signs, closed circuit television monitoring and real-time traffic counters. This is Special English. Eight out of 10 middle-aged people in England weigh too much, drink too much or don't exercise enough. An analysis from Public Health England says modern life is taking its toll on health. Public Health England has launched a campaign to reach out to the 83 percent of men and women aged 40 to 60 who are either overweight or obese, exceed alcohol guidelines or are physically inactive. The aim of the campaign is to provide free support to help them live more healthily in 2017 and beyond. Modern life is harming the health of the nation, with 77 percent of men and 63 percent of women in middle age overweight or obese. Obesity in adults has shot up 16 percent in the last 20 years. A spokesman for Public Health England in London said many people also can't identify what a healthy body looks like, suggesting obesity has become the new normal. The diabetes rate among this age group has doubled in this period in England. People were urged to consider the simple steps they could take to improve their health in the run-up to the New Year, by taking an online quiz. The spokesman said people need to eat better, be more active, stop smoking and consider their drinking. This is Special English. (Full text available on WeChat on Saturday.)
This is NEWS Plus Special English. I'm Liu Yan in Beijing. Here is the news. China has begun a national key research and development plan to streamline numerous state-funded scientific and technological programs. The plan focuses on research in fields vital to the country's development and people's well-being. The research fields cover agriculture, energy, the environment and health, as well as strategic fields key to industrial competitiveness, innovation and national security. The plan now covers 59 specific projects. It merges several prominent state sci-tech programs focused on key fields including biotechnology, space, information and energy. Breakthroughs of the programs included supercomputer Tianhe-1, manned deep-sea research submarine Jiaolong, and super hybrid rice. The plan aims to address low efficiency resulting from redundant programs. More than 100 projects will be merged into five plans, namely, natural science, major sci-tech, key research and development plan, technical innovation and the sci-tech human resources. The national key research and development plan is the first to be started. This is NEWS Plus Special English. Chinese scientists have developed a system to measure the leak rate for a vacuum environment which will be used in the country's third step moon exploration program. The measurement system will help scientists work out a better way to preserve samples from the moon, which are stored in a vacuum capsule, increasing the accuracy of research. The third step of the lunar exploration project involves taking samples from the surface of the moon and bringing them back to earth. The samples will be packed in a vacuum environment. The accuracy of measuring the finest leak in a vacuum capsule will have direct impact on the research result of the samples. The system will ensure a similar vacuum environment as found on the moon. 
It will also make sure that the two kilograms of samples remain uncontaminated on their way back to Earth, preventing them from being affected by any kind of environmental change, including extremely high or low temperatures. China has a three-step moon exploration project, namely orbiting, landing and returning from the moon. The Chang'e-5 lunar probe is expected to be launched around 2017 to finish the last chapter of the project.

You are listening to NEWS Plus Special English. I'm Liu Yan in Beijing.

Chinese researchers have successfully created autistic monkeys by implanting autism-related genes into monkey embryos. The monkeys are the world's first nonhuman primates to show the effects of autism. The study will play an important role in studying the pathology of the condition and exploring effective intervention and treatment. The research has demonstrated the feasibility of studying brain disorders with genetically engineered primates. That's according to neuroscientist Muming Poo, a foreign member of the Chinese Academy of Sciences, who is also a member of the National Academy of Sciences in the United States. Poo says that for quite a long time there has been little good drug innovation in autism due to the lack of suitable animal models. This work will allow researchers to conduct deeper studies into autism and the brain's working mechanism. Autism spectrum disorder is one of a range of neurodevelopmental problems. People with the condition usually exhibit defects in social interaction, stereotyped repetitive behaviors, anxiety and emotional difficulties. In recent years, the incidence of autism has continued to rise globally, and there is no effective treatment. Around four in every 1,000 Chinese children between the ages of 6 and 12 have the condition.

This is NEWS Plus Special English.

Southwest China's Guizhou Province is expected to evacuate more than 9,000 people to protect the world's largest-ever radio telescope before its completion in September.
The evacuation follows a proposal delivered last year by members of the Guizhou Provincial Committee of the Chinese People's Political Consultative Conference, the top advisory body. The proposal asks the provincial government to relocate homes less than 5 kilometers from the Aperture Spherical Telescope, to create a sound electromagnetic environment. Guizhou is expected to resettle people from two counties in four settlements by the end of September. Each of the residents involved will get a subsidy of 12,000 yuan, roughly 1,800 U.S. dollars, for the resettlement; and each ethnic minority household with housing difficulties will get a 10,000 yuan subsidy. Construction of the telescope began in March 2011 with an investment of 1.2 billion yuan. Upon completion, the telescope, which is 500 meters in diameter, will become the world's largest of its kind. It will overtake the one in Puerto Rico, which is 300 meters in diameter.

You are listening to NEWS Plus Special English. I'm Liu Yan in Beijing.

Central China's Hunan Province is offering a reward to anyone who can decode the inscription on the back of six ancient gold coins. The Cultural Relics Bureau of Jinshi City has offered 10,000 yuan, roughly 1,500 U.S. dollars, to anyone who can explain the mystery of the coins, housed in the city's museum. A small white glazed pot containing six foreign gold coins was discovered at a farm in the 1960s and was sent to the museum in the 1980s. They are classified as top-level national cultural relics. The coins were manufactured using an ancient Greek coinage method at least 650 years ago. The inscription on the front, in a rare type of Arabic, is the name of a king, but the information on the back remains unexplained. Cultural relics officials have consulted Chinese and foreign experts, but to no avail.

This is NEWS Plus Special English.
Online retailer Amazon China has unveiled its annual list of most romantic cities, with Zhengzhou in Henan Province declared the most romantic Chinese city of 2015. Zhengzhou led the country in the proportion of books sold last year on the topics of romance, relationships and marriage. The cities of Erdos and Baotou, both in north China's Inner Mongolia Autonomous Region, ranked second and third. Among the top ten, northern Chinese cities outnumbered southern ones for the first time. According to Amazon, the result does not necessarily mean that people in northern China are more romantic than their southern counterparts; the ranking reflects many factors, not just the cultural environment of a city. China's four first-tier cities, Beijing, Shanghai, Guangzhou and Shenzhen, did not feature in the top 40. Amazon says it appears that residents of smaller cities are under less pressure and have more leisure time to enjoy romantic literature.

You're listening to NEWS Plus Special English. I'm Liu Yan in Beijing. You can access the program by logging onto NEWSPlusRadio.cn. You can also find us on our Apple Podcast. If you have any comments or suggestions, please let us know by e-mailing us at mansuyingyu@cri.com.cn. That's mansuyingyu@cri.com.cn. Now the news continues.

Fewer fireworks were used across China over the Lunar New Year, as they were banned in many places over air pollution concerns. Two thirds of people polled in 35 major Chinese cities last year were in favor of fireworks bans at Spring Festival. The research was done by the center for public opinion research at Shanghai Jiao Tong University. Public concern over air quality means people routinely check air quality and wear masks, and many own air purifiers at home. Data from the Ministry of Environmental Protection suggests that air quality improved only marginally last year in the area around Beijing.
In Shanghai, fireworks are banned completely downtown, and firework purchases require real-name registration to track violators. A total of 140 cities in China have banned fireworks, while another 540 cities have restrictions in place. Fewer fireworks have made sanitation workers' lives easier: they cleaned up 80 percent less firework waste in Shanghai this year. In nearby Hangzhou, the host city of this year's G20 summit, fireworks have been banned for the whole year, and police have offered rewards for reporting any sale, storage, transportation or setting off of fireworks. But some people are concerned that the ban kills off a tradition, calling on fireworks makers to develop more environmentally friendly alternatives.

You're listening to NEWS Plus Special English. I'm Liu Yan in Beijing.

Giant panda researchers in southwest China's Sichuan Province have named a pair of panda cubs after receiving more than 3,000 responses. The winning names, "Olympia" and "Fuwa", were submitted by International Olympic Committee president Thomas Bach. Both names came out on top after five pairs of names were put up for a final vote. "Fuwa" is the name of the mascots for the 2008 Beijing Olympic Games. After the twins were born in June, the Chengdu Giant Panda Breeding Center launched a project to solicit names for the cubs between July and September. More than 3,000 responses, including 900 from outside the Chinese mainland, were submitted through the Sina Weibo microblog, the messaging app WeChat and e-mail. The twin sisters have attracted great attention worldwide because of their famous family. Their mother "Kelin" is well known for a photo showing her watching a "panda porn" video. The photo was chosen by the United States' Time Magazine as one of the "Most Surprising Photos of 2013". The twins' grandfather "Cobi" was named by former president of the International Olympic Committee Juan Antonio Samaranch in 1992.

This is NEWS Plus Special English.
China's box office totaled 3 billion yuan, roughly 460 million U.S. dollars, during the Spring Festival holiday week, the highest for any holiday period so far. The film authority says the box office from Feb. 8 to 13 increased by 67 percent over the same period last year. Three Chinese movies contributed almost 94 percent of the box office in this period. Among them, "Mermaid", directed by Hong Kong comedian and director Stephen Chow, led the box office with 1.5 billion yuan. "From Vegas to Macau III", starring Hong Kong actors Chow Yun-fat and Andy Lau, scored 680 million yuan, while "The Monkey King 2" took third place with 650 million yuan. "Kung Fu Panda 3" was also a success, grossing 812 million yuan since its opening on Jan. 29. China's box office earnings reached 44 billion yuan last year, up almost 50 percent over 2014. Total admissions reached 1.3 billion, a year-on-year increase of 51 percent. China has been one of the fastest-growing film markets in the world. As more cinemas open in smaller cities and towns, going to the movies is becoming a lifestyle in those places. Experts say China may overtake the United States as the world's largest film market in the next two to three years.

This is NEWS Plus Special English.

(Full text available in Sunday's WeChat edition.)
In our first math episode, Dr. Amites Sarkar speaks with us about mathematical concepts that took the human race millennia to understand. On the other hand, the amazing things people in history did accomplish are mind-blowing. Lastly, we discuss the similarity between Erdos numbers and Bacon numbers.

Corrections: Tycho Brahe's nose was brass, not bronze (I was close). Dr. Seth Rittenhouse is first a physicist and second a mathematician.

Image: Illustration at the beginning of Euclid's Elementa.
Dr David Erdos, University of Cambridge, delivers the first lecture from the "The General Shape of EU Internet Regulation After Google Spain" section of the "EU Internet Regulation After Google Spain" conference. The conference was held at the Faculty of Law, University of Cambridge on 27 March 2015, and brought together leading experts on data protection and privacy from around the world. It was held with the support of the Centre for European Legal Studies (CELS). This entry provides an audio source for iTunes U.
This item discusses C-131/12 Google Spain; Google v Agencia Española de Protección de Datos (AEPD), Mario Costeja González (2014), the Court of Justice of the European Union's long-awaited "right to be forgotten" case, which examined the rights of individuals mentioned in public domain material indexed on Google search. The Court's decision enunciated both the scope and breadth of data protection obligations in an even more expansive way than argued by the Agencia Española de Protección de Datos itself. It implies that Google acquires data protection obligations as soon as it collects information from the web, and not just after it receives a request for deindexing. Moreover, Google appears to have absolute obligations to remove material in a variety of circumstances, even if this is causing the individual mentioned no prejudice. It is particularly unclear how such obligations will operate vis-à-vis so-called sensitive data such as that concerning criminality, political opinion or health. The norms the Court articulated conflict markedly with those which are now mainstream online. Effective implementation will, therefore, depend less on legal technicalities than on how powerful such data protection norms are when placed alongside the vast cultural, political and economic power of "internet freedom". A further article on this subject was written on OpenDemocracy by Dr Erdos: http://www.opendemocracy.net/can-europe-make-it/david-erdos/mind-gap-is-data-protection-catching-up-with-google-search David Erdos is University Lecturer in Law and the Open Society in the Faculty of Law and a Fellow in Law at Trinity Hall, University of Cambridge. David's current research explores the nature of data protection, especially as it intersects with the right to privacy, freedom of expression, freedom of information and freedom of research.
For more information about Dr Erdos, please refer to his staff profile: http://www.law.cam.ac.uk/people/academic/d-o-erdos/5972 Law in Focus is a collection of short videos featuring academics from the University of Cambridge Faculty of Law, addressing legal issues in current affairs and the news. These issues are examples of the many which challenge researchers and students studying undergraduate and postgraduate law at the Faculty. This entry provides an audio source for iTunes U.
Internet TV options, geo-location techniques, 32-bit vs 64-bit operating systems, Gmail configuration without tabs, chkdsk revealed, hardware vs software firewalls, Profiles in IT (Evi Nemeth, godmother of UNIX administrators), Erdos number defined, 3D printing of high-security keys, encrypted email in Germany (response to NSA data collection), NSA surveillance (the official position vs the Snowden leaks; NSA has not been truthful), Google goes dark for 2 minutes (worldwide Internet traffic drops by 40%), Apple patents may block the import of some Samsung devices, Bitcoin now tracked with a Bloomberg ticker, Bitcoin features (anonymity, lack of government monetary controls, no bank transaction fees, susceptible to manipulation, will be blocked by governments if it becomes too successful), and Bitcoin money makers (Bitcoin ETF, Bitpay, Butterfly Labs, Coinbase, Silk Road). This show originally aired on Saturday, August 17, 2013, at 9:00 AM EST on WFED (1500 AM).
Paul Erdos was one of the greatest mathematicians of the 20th Century, the one that other mathematicians measure their distance from, and beyond that one of the most…
The Infinite Monkeys, Brian Cox and Robin Ince, are joined on stage by special guest Stephen Fry and science writer Simon Singh to find out whether we really are only six degrees of separation from anyone else. What started as an interesting psychology experiment in connectedness back in the 1960s has not only taken on a life of its own in popular culture, but in the last 10 years has begun to influence everything from mathematics to engineering and even biology. Brian and Robin look at how the concept of six degrees has influenced a whole new field of science and whether, in this age of social network sites such as Twitter and Facebook, we are in fact far more connected than ever before. We also find out what Robin's "Bacon" number is, whether Brian has an "Erdos" number, and whether, like Russell Crowe, any of the panel have successfully managed to combine the two. Producer: Alexandra Feachem.
... and much, much more of interest (for example, Microsoft's unhinged boss at the sight of an iPhone) in a very packed episode of the MFcast podcast. Make yourself comfortable and press play: an hour of immersion in the fascinating world of mobile technology, and even a few curiosities, awaits! :) If you like our podcast, don't forget to subscribe! You can feed this link to a player (such as iTunes or Juice), which will download a new MFcast for you every week. You can also add our RSS link to your feed reader to read our interesting news every day and catch fresh podcasts on Mondays and Tuesdays. Topics in this episode: - The Nokia Erdos smartphone, made from a single piece of stainless steel - Samsung Star S5230W: now with Wi-Fi - Pre-orders open for the ITG xpPhone, the first mobile phone running full Windows XP - MegaFon begins sales of a branded Lenovo netbook with a 3G modem - Archos Phone Tablet: the first phone from the French maker of multimedia players - Samsung Omnia smartphones move to WM 6.5, while Palm abandons Windows Mobile altogether - "Live" photos of the HTC Leo (Windows Mobile 6.5, 1 GHz processor, 4.3-inch capacitive multi-touch screen) - Announcement and review of the Opera Mini 5 beta - Apple iPhone 3GS: 35-40 thousand, officially in Russia?! - Steve Ballmer grabbed a subordinate's iPhone and nearly stomped on it (the phone)! - iTwinge: the first physical keyboard for the iPhone - How Kenyan bicycle-taxi drivers charge their mobile phones. Subscribe to MFcast: RSS
Mathematics and Physics of Anderson Localization: 50 Years After
Erdos, L (Ludwig-Maximilians-Universität München), Monday 15 December 2008, 16:30-17:30. Classical and Quantum Transport in the Presence of Disorder