Charles Torre travels around Microsoft to meet the company's leading architects and engineers to discuss the inner workings of our core technologies. Going Deep is primarily concerned with how things work, why they are designed the way they are, and how they will evolve over time. Going Deep also in…
Sven Groot explains how the Windows Subsystem for Linux (WSL) lets Windows applications access and modify Linux files, diving deep into the underlying architecture and how WSL uses Plan 9's 9P protocol to act as a file server between Linux and Windows. Sven Groot is a developer on the Windows Subsystem for Linux, and he is joined by Craig Loewen, a program manager on the Windows Subsystem for Linux. Craig Loewen's blog post is here.
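If you want to try this yourself, here is a minimal sketch of reading a Linux file from a Windows program over the \\wsl$ network share, which is backed by WSL's 9P file server. The distro name ("Ubuntu") and the file path are assumptions for illustration; substitute your own.

// Minimal sketch: reading a Linux file from Windows over the \\wsl$ share,
// which WSL serves via the 9P protocol. Distro name and path are assumptions.
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream file(R"(\\wsl$\Ubuntu\etc\hostname)");
    if (!file) {
        std::cerr << "Could not open the file over the 9P share\n";
        return 1;
    }
    std::string line;
    while (std::getline(file, line)) {
        std::cout << line << '\n';
    }
    return 0;
}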
It's always great to spend some time geeking out with Bart De Smet. As usual, he has a lot of technical details to share and only so much whiteboard real estate. Bart is still deeply engaged with Rx (evolving it, putting it to new uses, making it even more general and capable). How so, you ask? Well, ask Cortana. She will tell you that, in fact, Rx is one of those wonderful things that make her so asynchronously capable and reliable at managing your calendar and the growing list of other personal things that you have her do for you. (I haven't actually asked Cortana this, so this is an exercise for the reader...) Bart, how exactly is Rx used in Cortana? Remember our old friend IQbservable? Look, a whiteboard! (But, of course, we have catching up to do beforehand. Be patient.) Tune in. Enjoy.
What happens when .NET code is statically compiled to machine code (versus runtime-compiled via JIT) by the VC++ back-end compiler? You get highly optimized binaries that load and run faster than .NET code ever has before. Yes, my friends, .NET has gone native! :) Today, the .NET team is releasing a preview of their new compiler technology, .NET Native. You can generate .NET Native binaries for Windows Store apps only (in this preview). Tune in and meet a few key members of the .NET Native team, PM Mani Ramaswamy and Dev Lead Shawn Farkas. We go deep and Shawn spends quality time at the whiteboard. The team has done a lot of work to get where they are today and no part of .NET has gone untouched, from a new CLR to an optimized BCL. This project is a natural extension of the MDIL work that was done for Windows Phone 8. It's all about highly optimized .NET for modern hardware - that the VC++ back end is turning IL into highly optimized machine code is a very, very good thing - for developers and, especially, users! Note: Shawn and a fellow engineer will be on C9 Live at Build on Day 3, so please watch this and prepare questions to ask them live, right here on C9 (details to follow). Go native!
Code Digger is a lightweight version of Pex that allows you to explore public .NET methods in Portable Class Libraries directly from the Visual Studio 2012 code editor. It's a highly simplified and nifty way to leverage the power of Pex and Z3, one of the world's fastest constraint solvers. So, how does Code Digger actually work? Why the PCL requirement? What happens when you click on the magic button, Alice? Nikolai Tillmann and Peli de Halleux, software developers extraordinaire on MSR's RiSE team, join us again to dig into Code Digger in a casual setting (Nikolai's office, so native habitat). There is lots of geeking out at the whiteboard, of course. There is also a brief demo at the end. Tune in.
Immutable Collections are a new set of immutable collection types for .NET. We covered the high-level aspects of this new technology a few months back when Erik Meijer interrogated (in his friendly way) the PM of the project, Immo Landwerth, and the lead developer, Andrew Arnott. Since then, they have received a lot of feedback (thank you!) and have also been busy refining and optimizing their code. Here, Andrew and Immo go deep into how this stuff works and why it's designed the way it is. We talk about how to use these new types and how not to. We learn what the team has been working on and may work on for future releases. As is the case with any Going Deep episode, this is a long-form conversation and, well, deep. Tune in!
More on Immutable Collections (download the preview versions via NuGet): The NuGet package preview includes these types: ImmutableStack<T>, ImmutableQueue<T>, ImmutableList<T>, ImmutableHashSet<T>, ImmutableSortedSet<T>, ImmutableDictionary<TKey, TValue>, and ImmutableSortedDictionary<TKey, TValue>. Interfaces for each of these types are also defined to facilitate exchange of immutable collection types that may be implemented differently to optimize for very specific performance or memory requirements. See Andrew's blog for more detailed information on immutable types for .NET and more. Lots of great info...
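To make the central design idea concrete - every "mutation" returns a new collection that shares structure with the old one - here is a conceptual sketch in C++ of a persistent stack. It is not the .NET API, just the technique behind types like ImmutableStack<T>.

// Conceptual sketch (not the .NET API): an immutable, persistent stack with
// structural sharing. Push returns a new stack that shares every existing
// node with the old one, so the old stack remains valid and unchanged.
#include <iostream>
#include <memory>

template <typename T>
class ImmutableStack {
    struct Node {
        T value;
        std::shared_ptr<const Node> next;
    };
    std::shared_ptr<const Node> head_;
    explicit ImmutableStack(std::shared_ptr<const Node> head) : head_(std::move(head)) {}
public:
    ImmutableStack() = default;
    ImmutableStack Push(T value) const {
        return ImmutableStack(std::make_shared<const Node>(Node{std::move(value), head_}));
    }
    ImmutableStack Pop() const { return ImmutableStack(head_->next); }
    const T& Peek() const { return head_->value; }
    bool IsEmpty() const { return head_ == nullptr; }
};

int main() {
    ImmutableStack<int> empty;
    auto one = empty.Push(1);
    auto two = one.Push(2);                               // 'one' is untouched; 'two' shares its node
    std::cout << one.Peek() << ' ' << two.Peek() << '\n'; // prints "1 2"
}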
ActorFx is an MSOpenTech open source project with the goal of providing a non-prescriptive, language-independent model of dynamic distributed objects. This will in turn provide a framework and infrastructure on top of which highly available data structures and other logical entities can be implemented. ActorFx (aka Ax) is based on the idea of the Actor Model developed by Carl Hewitt. Erik Meijer figured this model would fit perfectly into the realm of managing data in the cloud. See his paper on the topic, which is the basis for the ActorFx project. You can learn more about the Actor Model in this Channel 9 video with Carl and Erik. Here, the lead developers of ActorFx - Brian Grunkemeyer and Joe Hoag - join us to dig into some of the details of the technology. We also discuss the potential of Actors in the cloud, the problems they solve, how you program with them on the client (CloudList is an interesting "cloud-enabled" type, for example), and potential applications of this approach to scalable distributed computing.
Herb Sutter presents atomic<> Weapons, part 2 of 2. This was filmed at C++ and Beyond 2012. As the title suggests, this is a two-part series (given the depth of treatment and complexity of the subject matter). STOP! => Watch part 1 first! Download the slides.
Abstract:
This session in one word: Deep. It's a session that includes topics I've publicly said for years are Stuff You Shouldn't Need To Know and I Just Won't Teach, but it's becoming achingly clear that people do need to know about it. Achingly, heartbreakingly clear, because some hardware incents you to pull out the big guns to achieve top performance, and C++ programmers are just so addicted to full performance that they'll reach for the big red levers with the flashing warning lights. Since we can't keep people from pulling the big red levers, we'd better document the A to Z of what the levers actually do, so that people don't SCRAM unless they really, really, really meant to.
Topics Covered:
The facts: The C++11 memory model and what it requires you to do to make sure your code is correct and stays correct. We'll include clear answers to several FAQs: "how do the compiler and hardware cooperate to remember how to respect these rules?", "what is a race condition?", and the ageless one-hand-clapping question "how is a race condition like a debugger?"
The tools: The deep interrelationships and fundamental tradeoffs among mutexes, atomics, and fences/barriers. I'll try to convince you why standalone memory barriers are bad, and why barriers should always be associated with a specific load or store.
The unspeakables: I'll grudgingly and reluctantly talk about the Thing I Said I'd Never Teach That Programmers Should Never Need To Know: relaxed atomics. Don't use them! If you can avoid it. But here's what you need to know, even though it would be nice if you didn't need to know it.
The rapidly-changing hardware reality: How locks and atomics map to hardware instructions on ARM and x86/x64 (with POWER and Itanium thrown in for good measure), how and why the answers are actually different last year and this year, and how they will likely be different again a few years from now. We'll cover how the latest CPU and GPU hardware memory models are rapidly evolving, and how this directly affects C++ programmers.
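For a taste of the "unspeakables" above, here is a minimal sketch of my own (not from Herb's slides) of the one textbook case where relaxed atomics are commonly considered reasonable: an event counter whose total is only read after the worker threads have been joined.

// Minimal sketch (not from the session): a relaxed atomic event counter.
// Relaxed ordering suffices because no other data is published through this
// counter; the total is only read after join(), which provides synchronization.
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

std::atomic<long> hits{0};

void worker() {
    for (int i = 0; i < 100000; ++i) {
        hits.fetch_add(1, std::memory_order_relaxed);   // atomicity only, no ordering
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(worker);
    for (auto& t : threads) t.join();                   // join() synchronizes with the workers
    std::cout << hits.load(std::memory_order_relaxed) << '\n';   // prints 400000
}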
Herb Sutter presents atomic<> Weapons, part 1 of 2. This was filmed at C++ and Beyond 2012. As the title suggests, this is a two-part series (given the depth of treatment and complexity of the subject matter).
Part 1 -> Optimizations, races, and the memory model; acquire and release ordering; mutexes vs. atomics vs. fences
Download the slides.
Abstract:
This session in one word: Deep. It's a session that includes topics I've publicly said for years are Stuff You Shouldn't Need To Know and I Just Won't Teach, but it's becoming achingly clear that people do need to know about it. Achingly, heartbreakingly clear, because some hardware incents you to pull out the big guns to achieve top performance, and C++ programmers are just so addicted to full performance that they'll reach for the big red levers with the flashing warning lights. Since we can't keep people from pulling the big red levers, we'd better document the A to Z of what the levers actually do, so that people don't SCRAM unless they really, really, really meant to.
Topics Covered:
The facts: The C++11 memory model and what it requires you to do to make sure your code is correct and stays correct. We'll include clear answers to several FAQs: "how do the compiler and hardware cooperate to remember how to respect these rules?", "what is a race condition?", and the ageless one-hand-clapping question "how is a race condition like a debugger?"
The tools: The deep interrelationships and fundamental tradeoffs among mutexes, atomics, and fences/barriers. I'll try to convince you why standalone memory barriers are bad, and why barriers should always be associated with a specific load or store.
The unspeakables: I'll grudgingly and reluctantly talk about the Thing I Said I'd Never Teach That Programmers Should Never Need To Know: relaxed atomics. Don't use them! If you can avoid it. But here's what you need to know, even though it would be nice if you didn't need to know it.
The rapidly-changing hardware reality: How locks and atomics map to hardware instructions on ARM and x86/x64 (with POWER and Itanium thrown in for good measure), how and why the answers are actually different last year and this year, and how they will likely be different again a few years from now. We'll cover how the latest CPU and GPU hardware memory models are rapidly evolving, and how this directly affects C++ programmers.
Part 2 -> Restrictions on compilers and hardware (incl. common bugs); code generation and performance on x86/x64, IA64, POWER, ARM, and more; relaxed atomics; volatile
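As a tiny taste of the acquire/release ordering covered in part 1, here is a minimal sketch of my own (not from the slides) of publishing data with a release store and consuming it with an acquire load.

// Minimal sketch (not from Herb's slides): release/acquire publication.
// The release store to 'ready' guarantees that the write to 'payload' is
// visible to any thread that observes ready == true via an acquire load.
#include <atomic>
#include <iostream>
#include <thread>

int payload = 0;                       // plain data, published via the flag below
std::atomic<bool> ready{false};

void producer() {
    payload = 42;                                      // 1: write the data
    ready.store(true, std::memory_order_release);      // 2: publish it
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {}  // 3: wait for publication
    std::cout << payload << '\n';                      // 4: guaranteed to print 42
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}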
Chris Stevens is a software developer on the Windows kernel team working on the Windows boot environment. Windows 8 boots faster than any other version of Windows. Why? How? Chris begins with the fundamentals (so, if you don't know anything about the boot process or what actually happens when an OS like Windows starts up, then you will after watching this...) and then digs into how the boot experience, environment, and process have evolved in Windows 8. Tune in!
Jon Berry, a veteran Windows engineer, digs into the new way Windows 8 manages processes to support the brave new world of Windows running on various CPU architectures, including ARM and Atom, which present an interesting set of technical challenges given the need to aggressively preserve energy when running, yet not fully running, in a battery-powered state. Jon owns the Desktop Activity Moderator (DAM), which, as the name implies, moderates desktop processes. The DAM is one of several new features in Windows 8 designed to ensure consistent, long battery life for devices that support connected standby. Connected standby occurs when the device is powered on but the screen is turned off. In this power state, the system is technically always "on" (to support key scenarios like mail, VoIP, social networking, and instant messaging with Windows Store apps). It is analogous to the state a smartphone is in when the user presses the power button. As such, software (including apps and operating system software) must be well behaved during connected standby. The DAM was created to suppress desktop app execution in a manner similar to the Sleep state. It does this by suspending or throttling desktop software processes across the system upon connected standby entry. This enables systems that support connected standby to deliver minimized resource usage and long, consistent battery life while enabling Windows Store apps to deliver the connected experiences they promise. The DAM is a kernel mode driver that is loaded and initialized at system boot if the system supports connected standby. How does Windows 8 provide this always-on experience and not drain the battery in 10 minutes? What does the DAM actually do? How does it work? The DAM is part of a larger management system, which Jon also describes here. What is connected standby, exactly? Jon spends a lot of time at the whiteboard answering these and other questions. Thank you, Jon! Tune in. Learn.
Herb Sutter presents C++ Concurrency. This was filmed at C++ and Beyond 2012. Get Herb's slides for this session.
Herb says: I've spoken and written on these topics before. Here's what's different about this talk:
Brand new: This material goes beyond what I've written and taught about before in my Effective Concurrency articles and courses.
Cutting-edge current: It covers the best-practices, state-of-the-art techniques and shipping tools, and what parts of that are standardized in C++11 already (the answer to that one may surprise you!) and what's en route to near-term standardization and why, with coverage of the latest discussions.
Blocking vs. non-blocking: What's the difference between blocking and non-blocking styles, why on earth would you care, which kinds does C++11 support, and how are we looking at rounding it out in C++1y? The answers all matter to you - even the ones not yet in the C++ standard - because they are real, available in shipping products, and affect how you design your software today.
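As a point of reference for the blocking vs. non-blocking discussion, here is a minimal sketch of my own (not from Herb's session) of the blocking style C++11 does ship: std::async plus future::get(), where the caller blocks at get() rather than attaching a continuation.

// Minimal sketch (not from the session): the blocking style standardized in C++11.
// std::async launches work, and future::get() blocks the caller until the result
// is ready. Non-blocking continuations (e.g., a .then()) are not part of C++11.
#include <future>
#include <iostream>

int compute() {
    return 6 * 7;  // stand-in for real work
}

int main() {
    std::future<int> answer = std::async(std::launch::async, compute);
    // ... do other work here while compute() runs ...
    std::cout << answer.get() << '\n';  // blocks until compute() has finished
}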
Dr. Marko A. Rodriguez is the Founder and CEO of the graph technology firm Aurelius and creator of the graph traversal language Gremlin. He has focused his academic and commercial career on graph theory, network science, and graph-system architecture and development. Here, we learn about graph systems, database architectures, and high-level graph theory. Tune in. Lots to learn!
The Windows heap manager has been around as long as Windows has, evolving with each release, getting faster, more reliable, and more secure. In Windows 8, the heap manager improves in two major areas: performance and security. In this video, Greg Colombo, a developer on the Windows kernel team working on the Windows heap manager, digs into the details. What are the changes that positively impact performance and security? This conversation, conducted entirely at the whiteboard, provides enough introductory information to ensure that even if you have no idea what a heap manager is (or what the heap is, for that matter), you will after you watch this. The complexity in this discussion increases over time, but it remains understandable all the way through. Greg is an excellent communicator! Huge thanks to Greg for taking the time to educate us. Tune in. Learn.
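If you have never touched the heap manager directly (most allocations reach it indirectly through malloc and new), here is a minimal sketch of the Win32 heap API, which is the user mode surface of the machinery Greg describes.

// Minimal sketch: talking to the Windows heap manager through the Win32 heap API.
// Most code reaches it indirectly via malloc/new, which sit on the process heap.
#include <windows.h>
#include <iostream>

int main() {
    // Allocate from the default process heap...
    void* p = HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, 256);
    if (p) {
        std::cout << "Allocated 256 zeroed bytes from the process heap\n";
        HeapFree(GetProcessHeap(), 0, p);
    }

    // ...or create a private heap of your own.
    HANDLE heap = HeapCreate(0, 0, 0);   // growable private heap
    if (heap) {
        void* q = HeapAlloc(heap, 0, 1024);
        if (q) HeapFree(heap, 0, q);
        HeapDestroy(heap);
    }
    return 0;
}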
Andrei Alexandrescu presents "Systematic Error Handling in C++". This was filmed at C++ and Beyond 2012.
Abstract:
Writing code that is resilient upon errors (API failures, exceptions, invalid memory access, and more) has always been a pain point in all languages. This being still largely an unsolved (and actually rather loosely-defined) problem, C++11 makes no claim of having solved it. However, C++11 is a more expressive language, and as always, more expressive features can be put to good use toward devising better error-safe idioms and libraries.
This talk is a thorough visit through error resilience and how to achieve it in C++11. After a working definition, we go through a number of approaches and techniques, starting from the simplest and going all the way to file systems, storage with different performance and error profiles (think HDD vs. RAID vs. Flash vs. NAS), and more. As always, scaling up from in-process to inter-process to cross-machine to cross-datacenter entails different notions of correctness and resilience and different ways of achieving such.
To quote a classic, "one more thing"! An old acquaintance, ScopeGuard, will be present, with the note that ScopeGuard11 is much better (and much faster) than its former self.
Tune in. Learn. Thanks to Andrei, Herb, and Scott for inviting C9 to film these wonderful sessions, rife with practical technical information for modern, professional C++ developers. Get the slides.
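To give a flavor of that "one more thing", here is a minimal C++11 scope guard sketch of my own. It is not Andrei's ScopeGuard11, just the basic idea: run cleanup code when a scope exits, however it exits.

// Minimal sketch (not Andrei's ScopeGuard11): run a cleanup action when the
// enclosing scope exits, whether normally or via an exception.
#include <cstdio>
#include <utility>

template <typename F>
class ScopeGuard {
    F cleanup_;
    bool active_ = true;
public:
    explicit ScopeGuard(F cleanup) : cleanup_(std::move(cleanup)) {}
    ~ScopeGuard() { if (active_) cleanup_(); }
    void Dismiss() { active_ = false; }   // call on success to cancel the rollback
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
    ScopeGuard(ScopeGuard&& other) : cleanup_(std::move(other.cleanup_)), active_(other.active_) {
        other.active_ = false;
    }
};

template <typename F>
ScopeGuard<F> MakeGuard(F cleanup) { return ScopeGuard<F>(std::move(cleanup)); }

int main() {
    std::FILE* f = std::fopen("example.tmp", "w");
    if (!f) return 1;
    auto guard = MakeGuard([f] { std::fclose(f); });  // fclose runs on every exit path
    std::fputs("hello\n", f);
    return 0;                                         // guard's destructor closes the file
}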
Continuing with our series of conversations with engineers in Windows, we meet Pedro Teixeira, a software developer on the Windows kernel team (aka core OS) who has improved the Windows thread pools in Windows 8. Thread pools are thread management subsystems (user mode and kernel mode) where threads are created and queued for any number of arbitrary tasks (work) required by applications and services. As it turns out, there are some significant improvements to the thread pool pattern in Windows 8. Pedro takes the time necessary - at the whiteboard for the entire interview - to dig into the details, beginning with first principles. So, if you don't really know what a thread pool is, then you will after the first 5 minutes of this interview. As the conversation progresses, the complexity will increase, but it will remain suitable for most user mode application developers. Speaking of user mode, much of the time in this interview is spent on the Windows 8 user mode thread pool. The kernel mode thread pool is addressed towards the end of the conversation. In Windows 8, there is a new thread pool model and a new thread creation policy. What is the new policy? How is the new user mode thread pool designed? How is it better than its predecessors? What does this all mean for developers? Tune in. Learn. Huge thanks to Pedro for taking the time to dig in - and for explaining things in such a clear way.
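If you have never used the user mode thread pool Pedro describes, here is a minimal sketch of the Win32 thread pool API (available since Vista): queue a work item, let a pool thread run it, and wait for it to finish.

// Minimal sketch: submitting work to the Windows user mode thread pool through
// the Win32 thread pool API. The pool decides how many threads to create and when.
#include <windows.h>
#include <iostream>

VOID CALLBACK DoWork(PTP_CALLBACK_INSTANCE /*instance*/, PVOID context, PTP_WORK /*work*/) {
    // This callback runs on a thread pool thread.
    const char* message = static_cast<const char*>(context);
    std::cout << message << std::endl;
}

int main() {
    const char* message = "Hello from a thread pool thread";
    PTP_WORK work = CreateThreadpoolWork(DoWork, const_cast<char*>(message), nullptr);
    if (!work) return 1;

    SubmitThreadpoolWork(work);                   // queue the work item
    WaitForThreadpoolWorkCallbacks(work, FALSE);  // block until the callback has run
    CloseThreadpoolWork(work);
    return 0;
}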
Arun Kishan digs into the low-level details of Windows 8's new application model. How has Process Lifetime Management (PLM) been reimagined in Windows 8? How does app suspension work, exactly? That is, what happens when an app is no longer in the foreground but not closed? How much work can you do in the background when an app is suspended? Arun covers several topics here, so please do set aside some quality time. In return, you will gain new levels of deep understanding that will help you take advantage of the Windows Store app platform and build excellent modern Windows applications. You've met Arun before, so you should be prepared for some very deep treatment of this new world for Windows and Windows developers. This is an excellent 400-level investigation of the core changes that support the new app model. Huge thanks to Arun for another exceptional conversation and whiteboard session. Tune in. Learn.
By now you've learned that the CLR, Windows Phone Client, and Windows Phone Services teams got together to develop "Compiler in the Cloud". All Windows Phone 8 apps written in .NET technologies will get the benefit of this collaboration. The end goal? Really fast startup of Windows Phone 8 .NET apps. "Compiler in the Cloud?", you ask. The idea is pretty simple. First, enter MDIL, or Machine Dependent Intermediate Language: a .NET hybrid assembly language. MDIL is all about compiling to native assembly instructions whenever possible, and compiling the rest to pseudo instructions that can quickly be translated to native instructions on the phone. Thus, this assembly containing a mix of pseudo instructions and native instructions can be shipped to the device (and is portable across the same architecture - for example, across all ARM devices), and on the device we perform a lightweight linking step to convert the entire assembly to a native image. Most of the heavy lifting is done when we compile the IL assembly to the intermediate file between an IL assembly and a native image (this is what MDIL is). "So what?", you ask. The linking step on the device that converts an MDIL assembly to a native image takes only one fifth of the time of traditional NGEN on the device. Thus, we get some of the benefits of both pre-compilation (since we are executing off the native image, where all instructions are assembly instructions) and JIT compilation (no heavy compilation on the device during framework updates). Tune in to meet the program manager for code generation in .NET, Subramanian (Mani) Ramaswamy, and one of the lead developers of "Compiler in the Cloud", Peter Sollich. Peter is an expert in precompilation. We go quite deep here with plenty of whiteboarding. Peter teaches us exactly what MDIL is and why it's designed the way it is. We also talk about the higher-level meaning of all this (apps start fast, at native speed!). All around, it's a great Going Deep episode. Take the time to watch and learn. Thanks, Mani and Peter! See Subramanian's BUILD 2012 session, where he goes into detail on MDIL/Compiler in the Cloud and other performance and functionality improvements in .NET for Windows Phone 8.
Technical Fellow Steve Lucco (architect and lead engineer of IE's Chakra JS VM) and Google's V8 and Dart architect Lars Bak discuss JavaScript from a virtual machine perspective (the implementer's point of view). IE and Chrome employ different strategies (although they do share some things in common) to make JavaScript execute faster. What are these strategies? How do Chakra and V8 differ? How are they similar? How fast can Lars and Steve make JavaScript go, anyway? What's the speed limit for JavaScript execution? What languages are used to write these VMs? (Hint: both start with C...) This is a candid technical conversation between two excellent software engineers tasked with making JavaScript run as fast as possible in their respective JS VMs. The conversation also includes a brief discussion of open source technologies. This was filmed at GOTO Aarhus 2012, an excellent developer event. Huge thanks to Lars and Steve for the excellent conversation, and to the folks at GOTO for providing a room for me for all these interviews (and lights, too!).
TypeScript, a typed superset of JavaScript that compiles to idiomatic (normal) JavaScript, is designed to make it easier to write cross-platform, application-scale JavaScript that runs in any browser or in any host. It was announced recently while Anders Hejlsberg and other key members of the TypeScript team were attending and speaking at the GOTO conference (an excellent cross-platform developer event!). Needless to say, Channel 9 was there. Google's V8 and Dart chief architect Lars Bak also happened to be at the event (he's currently leading the Dart team full time). Anders and Lars join us to talk candidly about TypeScript, JavaScript, and Dart. Huge thanks to Anders and Lars for this excellent conversation. Tune in. Enjoy.
Scott Meyers presents "Universal References in C++11". This was filmed at C++ and Beyond 2012. This is the full session in all of its splendor. Huge thanks to Scott for allowing C9 to provide this excellent C++11 content to the world.
From Scott's recently published article in the October 2012 edition of ACCU's Overload:
Given that rvalue references are declared using "&&", it seems reasonable to assume that the presence of "&&" in a type declaration indicates an rvalue reference. That is not the case:
Widget&& var1 = someWidget;      // here, "&&" means rvalue reference
auto&& var2 = var1;              // here, "&&" does not mean rvalue reference
template<typename T>
void f(std::vector<T>&& param);  // here, "&&" means rvalue reference
template<typename T>
void f(T&& param);               // here, "&&" does not mean rvalue reference
In this article, I describe the two meanings of "&&" in type declarations, explain how to tell them apart, and introduce new terminology that makes it possible to unambiguously communicate which meaning of "&&" is intended. Distinguishing the different meanings is important, because if you think "rvalue reference" whenever you see "&&" in a type declaration, you'll misread a lot of C++11 code.
Tune in. Scott's an incredible presenter and it's well worth your time to both read his article and watch his presentation on the subject. Great stuff!
Download the slides.
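As a small companion to the quoted snippet, here is a minimal sketch of my own (not from Scott's article) showing why the distinction matters in practice: a universal reference plus std::forward preserves the value category of whatever the caller passed in.

// Minimal sketch (not from the article): a universal reference in action.
// T&& here binds to both lvalues and rvalues; std::forward preserves the
// value category when passing the argument along.
#include <string>
#include <utility>
#include <vector>

std::vector<std::string> names;

void addName(const std::string& name) { names.push_back(name); }            // copies
void addName(std::string&& name)      { names.push_back(std::move(name)); } // moves

template <typename T>
void logAndAdd(T&& name) {            // "&&" here is a universal reference, not an rvalue reference
    // ... logging elided ...
    addName(std::forward<T>(name));   // forwards lvalues as lvalues, rvalues as rvalues
}

int main() {
    std::string persistent = "Ada";
    logAndAdd(persistent);            // T deduced as std::string& -> the copying overload is called
    logAndAdd(std::string("Grace"));  // T deduced as std::string  -> the moving overload is called
}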
Here is the Ask Us Anything panel from C++ and Beyond 2012. Andrei Alexandrescu, Scott Meyers, and Herb Sutter take questions from attendees. As expected, great questions and answers! Tune in!
Table of contents (click the time codes ([xx:xx]) to hear the answers...):
Message passing primitives in future versions of the standard... [00:00]
Standardized unit testing framework... [02:55]
std::async... [04:30]
Standard modules proposal... [08:14]
Keyword additions and the standard library... [09:35]
Problems (and solutions) with exceptions... [12:50]
Future of concepts... [22:34]
std::thread and thread interruption... [23:03]
When to use the auto keyword (and when not to...)... [25:03]
More on auto (benefits of redundancy, type conversion issues with bool to int?)... [29:31]
const and multithreaded programming; in C++11, const means thread safe, too... [35:00]
Yet more on auto (impact of rampant use on code readability/comprehension)... [42:42]
Compiler type deduction information (a compiler switch that prints out auto-deduced type information)... [50:18]
Printing out code for review that replaces auto with the actual type... [53:30]
auto and dynamic memory allocation... [54:59]
Useful, broadly-used concurrency libraries... [57:00]
Channel 9 was invited to this year's C++ and Beyond to film some sessions (that will appear on C9 over the coming months!) and have a chat with the "Big Three": Andrei Alexandrescu, Scott Meyers, and Herb Sutter. If you are a C++ programmer, then you know these names very well. If you've not heard of C++ and Beyond, well, put it down as a must-attend event (let's hope they do it again in 2013!). You can see material from last year's event here. At the end of day 2, Andrei, Herb, and Scott graciously agreed to spend some time discussing various modern C++ topics and, even better, answering questions from the community. In fact, the questions from Niners (and a conversation on reddit/r/cpp) drove the conversation. Huge thanks to Andrei, Herb, and Scott for their time and wisdom. Thanks, too, to the Niners who asked great questions!
Here's what happened:
[00:00] Themes for C++ in 2012 and beyond (and C++ and Beyond 2012)
[07:00] C++11 Efficiency and Concurrency/Parallelism (Standardization)
[12:12] dot_tom asks: When can we expect standardized modern libraries like XML, file system, web services?
[15:00] ZippyV asks: Standardized modern libraries: What has the response been? Any unexpected requests?
[17:17] static if
[26:26] Matt_PD asks: Future of template metaprogramming? Standardizing static loops?
[40:07] More on template metaprogramming (and static if and enable_if)...
[50:05] An async/await language feature in C++ would be nice; C&B 2013?
Rx 2.0 is RTW! Get it here. I caught up with Bart at his whiteboard (of course) to discuss the significance of this release as well as address some of the great additions to Rx, as outlined below (many of the topics below have been discussed in depth in other Rx interviews with Bart). We also talk about the new experimental build shipping model. Much of the time is spent talking about the portable libraries architecture for Rx for Windows 8, .NET 4.5, WP7/7.5, and beyond. Bart has been very, very busy and, as usual, his engineering is golden. Tune in! It's always a pleasure to geek out with Bart. So much to learn. Congratulations to the Rx team!
The highlights of Rx 2.0 include:
Support for building Windows Store apps for Windows 8. This includes primitives to synchronize with the Windows XAML CoreDispatcher and interop with WinRT events and IAsync* objects.
Support for Portable Class Library projects, allowing code reuse across ".NET Framework 4.5" and ".NET Framework 4.5 for Windows Store apps" projects. We're planning on adding Windows Phone 8 support to this going forward.
Integration with the new C# 5.0 and VB 11 "async" and "await" features. In Rx v2.0, you can await an observable sequence, allowing one to apply the power of Rx to the new asynchronous programming model.
Enormous performance improvements, with a 4x speedup of the query pipeline, vastly reduced object allocation rates, massively increased throughput of schedulers, and much more.
An improved error handling strategy, enabling higher resiliency and proper resource cleanup for queries in the face of user errors at various levels.
A thorough revisit of the way we deal with time, to improve efficiency and predictability. This includes better support for periodic timers, improvements to absolute time scheduling, etc.
Various new and improved query operators.
Bart De Smet is back, and he's going to go deep into improvements made to Rx 2.0 RC (so, Rx 2.0 is getting close to coming out of the oven!). As you'd expect, Bart and company have been very busy since Rx 2.0 Beta - lots of performance and reliability improvements, some heavy work on how Rx manages time, new error handling capabilities, and event subscription improvements for Rx running on WinRT. Most of the time is spent at the whiteboard - a very comfortable and natural place for Bart! Note: there is a lot of time in this interview, both in terms of interview length and the notion of time itself. Use at your own risk and watch out for unexpected wormholes.
More on Rx 2.0 RC: This new release of Rx includes a number of improvements to the way we deal with time. As you likely know, dealing with time is a complex undertaking in general, especially when computers are involved. Rx has a lot of temporal query operators to perform event processing, and therefore it needs to be able to schedule work to happen at particular times. As a result, notions of time exist in virtually any layer of the system: from the schedulers at the bottom (in System.Reactive.Core) to the query operators at the top (in System.Reactive.Linq). [Bart De Smet]
Download page: https://go.microsoft.com/fwlink/?LinkID=255295
Bart's epic blog post: https://blogs.msdn.com/b/rxteam/archive/2012/06/17/reactive-extensions-v2-0-release-candidate-available-now.aspx
At Lang.NEXT 2012, several conversations happened in the "social room", which was right next to the room where sessions took place. Our dear friend Erik Meijer led many interesting conversations, some of which we were fortunate enough to catch on camera for C9. We begin these Expert to Expert episodes with a "standing" conversation (participants stand comfortably close to the whiteboard) among computer scientists Carl Hewitt, Visiting Professor at Stanford University, creator of the Planner programming language, and inventor of the Actor Model (the topic of this conversation); Clemens Szyperski, an MSR scientist working in the Connected Systems Group; and Erik. What are actors, exactly? No, really. What are they? When is an actor an actor? Everything you wanted to know about actors but were afraid to ask... It's all right here. Big thanks to Carl, Clemens, and Erik. This is an excellent E2E(2E)!