Selective serotonin reuptake inhibitor (SSRI) antidepressants are widely prescribed to treat depression and anxiety. However, nearly half of patients on these treatments report a common side effect: *emotional blunting*, in which emotions, whether positive or negative, feel muted. Researchers recently identified a mechanism underlying this phenomenon: impaired reinforcement learning, a process essential to how we interact with our environment.

The role of serotonin in emotional learning

Reinforcement learning is a fundamental brain process that lets us associate our actions with the consequences they produce. When we have a positive experience, the brain strengthens the association between that action and the reward obtained, prompting us to repeat the behavior in the future. This mechanism relies largely on dopamine, but serotonin also plays a key role.

SSRIs raise serotonin levels by blocking its reuptake at synapses, which regulates mood and reduces depressive symptoms. However, this increase also affects reinforcement learning by reducing the brain's sensitivity to rewards.

A diminished response to emotional stimuli

In a recent study, researchers administered SSRIs to healthy volunteers for several weeks and analyzed their responses on reinforcement learning tasks. The results showed that participants on SSRIs had more difficulty adapting their behavior based on the rewards they received. In other words, they took less pleasure in receiving a reward, which could explain why positive emotions are dampened on antidepressants.

This phenomenon leads to a reduction in overall emotional reactivity. So while patients feel negative emotions less intensely (which can be beneficial in treating depression), they also perceive positive emotions with less intensity.

Toward personalized treatments

These results suggest that emotional blunting may be an unavoidable side effect of SSRIs, but also that it could be reduced by adjusting doses or exploring other classes of antidepressants. This finding opens the way to more personalized treatments that seek to balance therapeutic efficacy with the preservation of positive emotions. Hosted by Acast. Visit acast.com/privacy for more information.
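To make the reward-learning mechanism concrete, here is a minimal sketch of a standard delta-rule (Rescorla-Wagner-style) value update with a reward-sensitivity parameter, the kind of simple model often used to describe blunted reward learning; the parameter names and values are illustrative assumptions, not figures from the study.

// Minimal delta-rule sketch of reward learning.
// The reward-sensitivity (rho) values below are made up for illustration only.
public class RewardLearningSketch {
    // V <- V + alpha * (rho * reward - V): a smaller rho shrinks the prediction
    // error, so the learned value of a rewarded action grows more slowly.
    static double learn(int trials, double alpha, double rho) {
        double value = 0.0;
        for (int t = 0; t < trials; t++) {
            value += alpha * (rho * 1.0 - value); // the reward is always 1 here
        }
        return value;
    }

    public static void main(String[] args) {
        double alpha = 0.3; // learning rate, arbitrary for the sketch
        System.out.printf("Learned value, full reward sensitivity (rho=1.0): %.3f%n", learn(10, alpha, 1.0));
        System.out.printf("Learned value, blunted sensitivity (rho=0.5):     %.3f%n", learn(10, alpha, 0.5));
    }
}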
In this episode I interview Dr. Pedro Melo, a psychiatrist (Instagram: @drpedromelopsiquiatria), who will help us decide which antidepressant to use for hot flashes. When we have a patient who cannot or does not want to use hormone therapy but is quite symptomatic, we can choose an SSRI or SNRI to improve her symptoms. Dr. Pedro gives us practical tips and polishes the theory behind these medications. I hope you enjoy it! This episode is sponsored by the MedCof team, and with the coupon code ESPECULANDO you get R$300 off their courses for R+ in OB/GYN, Mastology, and for the TEGO: https://tego.grupomedcof.com.br
More and more people are using psychiatric medications to treat conditions such as depression and anxiety. The drugs commonly called antidepressants, especially the selective serotonin reuptake inhibitors (SSRIs), are among the most common treatments. However, along with their benefits in relieving emotional symptoms, these drugs often bring side effects that affect daily life, and among the most concerning is their impact on sexual life. Every Wednesday at 8:20 am, join me in my column to talk about sexuality, relationships, and sex-affectivity on @notifyok, via @gambaonline @soniderafm. You can find me on Instagram as @lic.noeliabenedetto https://www.instagram.com/p/Cfq_p9jLuqy/?igshid=YmMyMTA2M2Y=
Original article - Link --- New MGFamiliar Android & iOS app - Link --- Subscribe to the MGFamiliar Podcast so you don't miss any of our episodes. Also, consider leaving us a review or a comment on Apple Podcasts or Spotify. --- Music: Summer Somewhere In Cuba · Cumbia Deli - Link
In this episode, Jennifer Cocohoba, PharmD, AAHIVP, and David Koren, PharmD, MPH, AAHIVP, FIDSA, cover the most impactful LA ART and LA PrEP clinical studies presented at AIDS 2024, including:

The PURPOSE 1 study of twice-yearly lenacapavir injections vs daily oral tenofovir as PrEP in cisgender women
An open-label extension of HPTN 084 evaluating LA CAB safety during pregnancy
48-week data from the IMPAACT 2017/MOCHA study of LA CAB + RPV in adolescents with HIV
A substudy of FLAIR evaluating the efficacy, safety, and tolerability of SC self-administration of LA CAB + RPV

Presenters:
Jennifer Cocohoba, PharmD, AAHIVP
Professor of Clinical Pharmacy
Department of Clinical Pharmacy
University of California San Francisco School of Pharmacy
San Francisco, California

David Koren, PharmD, MPH, AAHIVP, FIDSA
Adjunct Clinical Professor
Temple University School of Pharmacy
Clinical Pharmacist Specialist
Temple University Health System
Philadelphia, Pennsylvania

To access all of our new podcast episodes, subscribe to the CCO Infectious Disease Podcast on Apple Podcasts, Google Podcasts, or Spotify. See the full program here.
According to a Forrester study, 53% of sellers said that sales technology positively impacts their results. So how can you optimize your enablement tech stack to drive productivity?

Shawnna Sumaoang: Hi, and welcome to the Win Win Podcast. I am your host, Shawnna Sumaoang. Join us as we dive into changing trends in the workplace and how to navigate them successfully. Here to discuss this topic is Sarah Gross. Thanks for joining us, Sarah. I'd love for you to tell us about yourself and your background.

Sarah Gross: Shawnna, thank you so much for having me, I'm excited to be here today. I have spent a decade in sales and a decade in enablement, and absolutely love both spaces. I started my career as a BDR, moved into sales management and got the typical dashboard and quota, and very quickly realized there was a lot more to sales and to managing salespeople. I ended up actually researching enablement and then got into being a practitioner myself. So, as I've built out teams over the years, what's been interesting is to watch how enablement went from L&D to how we think about making our team more productive, but we weren't calling it productivity quite yet. We were like, "Oh, let's get people from 100% to 110% of attainment," to now being the real drivers within the organization of how technology helps our team be more effective and therefore productive in what they're doing day-to-day.

SS: I love that and thank you again so much for joining us today. Now you and I have known each other for a while and I know that you focus deeply on a couple of core components when it comes to driving a business. That's the people, processes, and technology. How does that approach guide the way that you build your enablement strategy?

SG: Yeah, first things first, you have to have the right people in the right roles. So I'm a big fan of, especially in a sales organization, right? Having that BDR, SDR structure, the account executive, the SE. And what's important from an enablement lens is being able to make sure that everybody knows their role and function. It leads to the process side of things. So if everyone knows what role they're playing in the sales process, then it is defining what that process is, right? Less of the upside of what are our sales stages and how we forecast, but more of what are the gates through which I need to uncover information from the customer and our internal selling team to advance, and to know that I'm in the right place of moving this opportunity forward and spending more time and resources on it. How do we make sure that process then maps to the methodology that we're using from a customer buying lens, as well as a selling lens? And then how do I make sure that we have the right technology, that I'm not asking my reps to copy and paste a Salesforce field into 15,000 different places, but rather a Sales Navigator that's layered with a ZoomInfo of the world and allows them to push information into that centralized repository in an easy way to build on account plans, to build on opportunity plans, and to have that structure of what good selling looks like. That's the baseline, in my opinion, for what enablement has to have as the building blocks at the bottom of the house, if you will, to then start to do things like AI and getting into call recording and understanding what words are being used, when, where, and how. And devising an enablement plan that is by cohorts of people that need a specific topical area.

SS: I love that you established that baseline.
What are some challenges that you've encountered as a leader when it comes to creating an effective enablement strategy, though? And how did you overcome those?

SG: The strategies have changed over the years. I'd say at the beginning of enablement, the early 2010s, right? It was a lot about what is it. Why is it different than L&D? Why would we invest in this different team and structure? Then we evolved into, okay, they run the LMS, right? They're the people that give the training. And they may have an action item coming out of it. And then we've gone into this world of, okay, we have technologies that are supporting enablement now, which is fantastic. They're designed for enablement, and they're giving us the right level of analytics. But how do we, as practitioners, continue to show that we're elevating the value of our practice? I still see where people are just trainers or they're just go-to-market folks who get it out to the sales team. I hear that a lot. You're the person who helps us communicate with the revenue team and speaks their language, which is a big win in and of itself, right? We're not being called L&D anymore, but on the flip side of things, we're not the seat at the table driving the go-to-market strategy. I do see that starting to change, where more companies than ever are hiring VPs of enablement. They want to attach to the operational strategy and the go-to-market strategy and have a plan where all three of those functions are working together. There was this weird trend during COVID where everyone was down-leveling their enablement to save costs, and now I'm starting to see that flip again, where we're hiring lots of folks that are at a senior level. They want 10-plus, 15-plus years of enablement experience to guide them on "how do I drive that productivity lever?" "How do I be smarter in how we sell in the market?" Because we do have fewer salespeople and we have a bigger market that we have to attack.

SS: Now, as we talked about in that first question, people, process, and technology: on the people side, what are some of the key things you prioritize when you're building a high-performing enablement team?

SG: I think the people side really matters in how you build out your enablement team so that you're embedded enough to understand the business, right? For example, if you have a role that covers everything up to ISRs, right? Inside sales reps, where they're both covering leads and running the sales process, you need a very different enabler to support that type of practice. And someone that's maybe in an office with them, as an example; that's where I think it makes sense when they're all in the office. Or someone who has experience both in running a lead gen team and a selling team, so that they can pull those together. As you think of the sales structure, that's where I think enablement has to mirror it. This is why enablement teams change from time to time, because it's normal that sales structures change, and you have to make sure that we're constantly evolving with them. When those two structures start to look different, like when I see enablement teams aligned by product, right? They're seen as generalists and they're less impactful to the overall business.

SS: That makes sense. On the process side, how have you partnered with your key stakeholders to identify process gaps and solve those inefficiencies?

SG: That's the question always.
Having really strong ops partners and being part of the conversation that you have with your chief revenue officer is important. I think that the way that works best is that ops gives you visibility to everything that they're showing to the sales leader. But you have a different lens. The sales leader is looking at, "Where do I have pockets of inefficiency where I could either reduce my headcount or change that headcount around?" You're looking at it as, "How do I find cohorts of people that maybe are at that 90% mark, where they would make a lot more money and we would make a lot more money if they were at 100% or 105%?" The way I look at the data that's being provided to me is always, "How can I think of running cohorts of people?" So for example, I've had in the past where I've got a discovery coach, someone that I can deploy if there are people in SDR land, AE land, and SC land that need that particular topic area. And by deploying them, I'm keeping my business partners in place. So you've got that strategic person that they trust introducing the session with somebody that's focused on that particular topical area to move the needle as it comes to productivity. I'd expect that a lot of enablement teams are probably going to have someone who's that AI specialist over time, where their entire job is to match that internal data set with what we are hearing from our enablement tools. And how can we act on every call we're looking at across the system? If we hear X word, deploy a one-sheeter or a cheat sheet to the rep in the moment. It's a talk track that gets deployed to our SDR organization, right? Maybe it's even a technical validation asset that's going to our enterprise reps as they run into X integration that they don't run into all of the time. So I'm starting to see that. I almost think SEs are always a secret sauce to a lot of organizations. To me, it's how we capture what's in our SEs' minds and deploy it on every call that's happening, not just the ones that an SE is attached to.

SS: Absolutely. And then the last piece of the puzzle: technology. How does an enablement platform help you effectively bring your strategy to life? And in your opinion, what is the strategic value of a unified platform?

SG: First and foremost, that's definitely evolved in the past, like, five-ish years. Originally, I would say we were looking at just, do you have SCORM packaging? It was very traditional instructional design technology that you needed. Now, because we're further away from what is traditional L&D, I'm seeing a lot of enablers really need their platform to drive, "What are we missing?" We're running a discovery program, we're running a negotiation program, and we're running our new product launch. But what are people looking for in the system, whether that's on the content side or the learning side, that we're not providing to them today? Or, what are we providing that nobody gives a shit about, and we're wasting our time and resources? So, it's an efficiency lever for enablers, which is what I see the platform as. The reason I think that content and learning have to be in the same place is it's part of the learner's journey, right? Humans either want to consume information in written format or video format for the most part, right? And as you take those two things, you have to meet the learner where they are. That's something we all knew years ago. But now it's not just meeting them where they are, it's meeting them where they are at the right time.
And so it's making it a self-service model where you can then look at analytics and drive what you're putting out there to them in the most efficient way. So if you don't have what content someone is consuming, what learning someone is consuming, and what they are sharing externally with their customers, you're missing a piece of the puzzle. Because, and this is another thing I've always been able to point out to my heads of revenue, we're consuming and teaching this internally, but our customers are actually looking at this when we send it over to them. So there's a disconnect between the two things that we really need to solve for.

SS: Now, obviously, if you make the investment in technology, you want to ensure that your reps are taking full advantage of it. What are some of your best practices for driving adoption of your enablement solution with your reps?

SG: Ariel Lashaza, who's someone who's worked for me at a couple of organizations, did this extremely effectively. We brought it to the level of reps. Think of TikTok: we called it "What You Want to Know Wednesdays", and it was a piece of information that they could try every single week. I think the way that these things work is it's the curve, right? That we always know. You've got your early adopters, you've got everybody in the middle, and then you have your late adopters. There are certain folks on every team, you know they're not going to do it until we tell them they have to. Then there are people that you know are going to chomp at the bit to get anything. And then there are people in the middle whose manager has to tell them to do it. So I think it's identifying who your early adopters are, especially as you do those fun "What You Want to Know Wednesdays", and letting them drive the topics, because then they're excited about it, they tell that middle crew and get them more excited about it, and you ask them to share at team meetings, and that's how we've rolled things out. Having a little bit more structure around it, I think that it's important that, one, it's leadership-led. So you have a preview from the frontline manager level up: what's to be expected, how are we going to support you, what do we need from you? Very simply. Then as you actually roll things out to the team, it's embedding that into your monthly learning, however you're already communicating to the team, and then it's taking those early adopters and showing them the impact on revenue, which is why I think that Highspot's report of tracking opportunities related to Pitches is like money. Because you can go back and you can say, "Hey, you sent out 18 Pitches. Those were viewed 300 times, and you actually closed every single one of those deals. So, your rate of closure when you use the pitching functionality is 100%. What was your close rate before you did that? Maybe it's 40%." And so immediately to them, they see the ROI. I just think we have to, as enablers, always be selling. And the way that we sell is by using that data and analytics for our revenue team.

SS: So on the topic of data, I think that you have always been someone who has helped those around you succeed by really being inquisitive and leveraging that data, as you mentioned, to make business decisions. What are some of your best practices for, to the point you just made, measuring the business impact of enablement?

SG: Best practice one is, what's your CRO measuring? If they're being measured by ARR attainment and NRR, right? You have to know what those numbers are first and foremost.
Secondly, it's starting to break down, "Hey, CRO, if I did X, do you think that would move the needle on what you're being measured against?" If that's the case, let's put this in place, let's put a stake in the ground where we are today, and measure it again in six months. So I think that alignment up front is really important, and making sure you're not just tracking to time-to-ramp if they don't care about it, right? Super important you're aligned. Thing two is buddying up with the ops team and saying, "I know you're measuring this. I want to measure how we can impact this as a team. You're probably making some operational changes. Maybe it's quota, maybe it's territory alignment, et cetera. Let me build off of that. I'll actually help you communicate that to the reps that it's happening to. And, once we communicate that, instead of it being a 10-minute call, let's make it a 30-minute call, where for the last twenty minutes, let's talk about how they can attain that number in a better and more succinct way. And maybe what Sales Plays are we running to support them in hitting those new targets that we've provided?" So it shows the unity between ops and enablement, which is such an important part. And then it's coming back. I've always said, at a minimum, quarterly reviews. What was the enablement impact? And that's where I do think we should push our vendors to support us in that, right? They should be providing us with Scorecards. They should be providing us insight into what's going on in our system from their CSMs. And I encourage all of my enablement peers to be requesting that. It is well within our rights to push that we need that level of data to run our businesses. Think of a CMO or CRO. They've been holding vendors accountable for providing them that since the beginning of time. And it's now our time to do that so that we can have those executive-level conversations. And we're not just being like, "Oh, we launched the learning path," or, "We launched the huddle." That's fantastic, but we've all moved past that. It's not about you putting it out there. It's about whether it hit the mark, and actively, proactively sharing what did or did not, and what you're going to do next.

SS: To that executive leader point, given your wealth of experience in enablement, how have you gained buy-in and support for your enablement strategy with your executive leaders?

SG: It's always the fun thing when you get a new CRO or a new C-level, right? How do you set their expectations of enablement, right? Honestly, there's still a lot of different definitions out there of what a good enablement team looks like. I think it's really important that up front you understand: what are their priorities and how are they being measured? I didn't say, what are they measuring? How are they being measured? Every CRO has something that the board is asking them for. That's just part of the game, and part of being in sales. And it's usually different than what quota is being assigned out to their team, et cetera. I think the second piece of that is then saying, "How can I communicate this with you? What frequency would you like to see updates from me and in what format?" If they don't have a proposed format, something I learned from an early CRO of mine is CAB: Conviction, Action, Benefit. Having three columns on the screen: what are we convicted to do together this quarter? Again, that's a together statement. What actions am I taking, right? Or do I need you to take to support that?
And I always suggest there's a two-way street there. And then what's the benefit? What is the thing that we are measuring? And if we come back and it happens, we know that we are successful in this. Super simple, but I would go with whatever format your particular leader is looking for. They might have a different version of that. Or propose, "Hey, if I did this, would that work for you?" I don't think it should ever be more than one page. It shouldn't be a ton of charts and a ton of things, solely because that's your job as enablement. Our job to them is that we're measuring those big projects together.

SS: I love that advice and that acronym. Last question for you, Sarah: where do you see the future of enablement going over the next few years? And what are you most excited about?

SG: That's a good one. We've come a long way in a very short amount of time as a profession. I definitely think that AI is a big part of our future. It's a big part of everybody's. It's a big part of humans. I think as enablers the thing we do that is so critical to every business is we understand how humans work, think, and do. So it's using AI to be more human and to help our team with those productivity levers. So think of right now: we run a pitch contest and you use your top three reps, you have them pre-recorded as an example, and then you launch it to everybody and you have grading criteria in the system. Maybe you have something like a Copilot of one of these SORMs that's running and giving a little bit of insight. But, in the future, think about if that could simulate a customer in their territory, in their patch that they're trying to sell to today. It could be somebody that looks and feels like their particular segment. That becomes even more impactful as we continue to grow. And I do see that's where we're going. I want to level set to all CROs out there, that's not where our technology is today. But it is absolutely something that's within reach in the next couple of months. I also see a lot more happening around the traditional Salesforce, where we used to serve up, "Hey, have you thought about sending this white paper type of thing?", integrated more into the call intelligence world where you're in a call, and it says, "Hey, you just hit a roadblock right in that conversation. Consider this objection handling technique," or, "Consider saying you want to bring your SE in to do this technical validation based on the integration they just asked for." There's a lot more of that to come as we continue down the path. What am I most excited about? It is AI. I say that with a little hesitancy in my voice. I think that it's a way for enablement to scale without having to have humans and to be able to spend time doing the things that we love doing, which is talking to reps, gathering feedback, and being part of the collaboration that is revenue. I think we spend a lot of time right now, in some cases, behind the scenes in our LMS and CMS because they're not totally optimized. So if we are in this AI lens where our CMS is sending us an email every morning of what's good, bad, and not happening in the system, and we're not going through hundreds of thousands of pieces of data, that gives you so much more time to be in front of the team and with the team. Right now, I see enablers having to choose one or the other that they're really good at.
And the people who are behind the scenes sometimes aren't part of the executive meetings because they don't have enough face time, but they're really good at the right programs, et cetera, to get out. And then some people who have too much face time and not enough behind the scenes are dinged that they're not analytical enough, or they're not using AI to drive their business. So it's a push-pull today. I think that our vendors are really catching up with supporting enablement, just like Salesforce has always supported a revenue organization, with design that will get us to where we need to go in the future. So, I am excited about AI because I think it will give us more time to be the human element in front of our revenue organization.

SS: I love that. And I know we are very excited about Highspot Copilot as well here and all the AI innovation we have coming. And we're going to actually be announcing some of that at our Spring Launch Discover Webcast shortly. So thank you, Sarah, so much for taking the time to chat with us. I really appreciate it.

SG: Absolutely. It was my pleasure.

SS: To our audience, thank you for listening to this episode of the Win Win Podcast. Be sure to tune in next time for more insights on how you can maximize enablement success with Highspot.
Professor Deepinder Dhaliwal, MD, is a cornea, cataract, and refractive surgeon at the University of Pittsburgh, where she has pioneered so much in our field. You will recognize her as a panelist at our annual Best of CataractCoach session at the ASCRS meeting. The next one is coming very soon, on Saturday, April 6, 2024 at 8:00am at the Boston Convention Center. She is also a licensed acupuncture professional, and she has used LASIK to help children with anisometropic amblyopia. Dr. Dhaliwal is the president-elect of the ISRS: International Society of Refractive Surgery. I encourage you to check out the ISRS at ISRS.online and register as a member like I did. We feature a new podcast every week on Sundays and they are uploaded to all major podcast services (click links here: Apple, Google, Spotify) for enjoying as you drive to work or exercise. I am really excited to share these amazing podcasts with you. The full video of the podcast will also be here on CataractCoach as well as on our YouTube channel. Our podcast is now sponsored by Harrow, Inc. and we are thankful for their support.
Perinatal depressive disorder is one of the most prevalent psychiatric conditions in the postpartum period and may be present in up to 15% of postpartum women. When left untreated, it can have negative consequences not only for the mother but also for her children and partner. In this regard, several studies indicate that the use of antidepressants during the postpartum period can be useful in achieving symptomatic remission. However, there is little evidence about the long-term consequences for mothers and children of antidepressant use in the postpartum period. This episode of the PQU Podcast presents the results of a cohort study conducted in Norway on the short-, medium-, and long-term repercussions of selective serotonin reuptake inhibitor use during the postpartum period for mothers and children. Don't miss it!
Shawnna Sumaoang: Hi, and welcome to the Sales Enablement PRO Podcast. I'm Shawnna Sumaoang. Sales enablement is a constantly evolving space, and we're here to help professionals stay up to date on the latest trends and best practices so that they can be more effective in their jobs. Today, I'm excited to have Bana Kawar from AWS join us. Bana, I would love for you to introduce yourself, your role, and your organization to our audience.

Bana Kawar: Thank you very much, Shawnna. Hello, everyone who's listening to this podcast, and a shout-out to all enablement professionals all over the world. My name is Bana, and oftentimes people think of a banana without an A to remember my name here in the UK. I look after UK public sector enablement here at Amazon Web Services. I have been with the company for seven and a half years now, in different functions and different countries. I currently spearhead the enablement function here in the public sector, helping the organization grow to what it is today and reach our organizational outcomes. Apart from work, I have a huge passion for ID&E. I try to call it IDEA where possible, where the A stands for action: it's inclusion, diversity, equity, and action. I co-founded the EMEA chapter, along with two other Amazonians, to reach where we are today. I love to mentor and empower different leaders on different topics. I'm a career coach as well. That's a bit about me, Shawnna. I look forward to our discussion today.

SS: Absolutely. Likewise. I look forward to digging into that with you as well. Now to get started, for our audience in sales enablement, which I'm sure they can all relate to, you are extremely passionate about driving excellence in sales performance. I'd love to start there. From your perspective, how does enablement strategically influence sales performance?

BK: A lot of organizations, including ours, are focused on growth. I truly believe that as an enablement function, we have a key responsibility, and also the pleasure, to be part of that journey as well. If I look at my current role for the UK, we're trying to hit 1 billion in business this year, and enablement is helping to drive insights that help sellers in different orgs and different roles, from ISRs, account managers, business developers, partner teams, etc., to drive those valuable conversations with their customers and help them on their key missions. I really think when enablement is aligned to the business and also aligned to revenue ops or business ops, depending on how organizations define it, you can influence strategically as well. The last piece that comes to mind is how enablement can play a role in reducing time to market. When you're enabling teams to be more adaptive versus reactive, you are already helping reduce that time to launch and ramp up faster. Finally, the downstream impact of this is having more time and more focused resources to drive high-velocity decisions and build better products from there. In a nutshell, those are three different ways I see enablement playing a role in the business strategy.

SS: I couldn't agree more. One of your areas of expertise is really around building everboarding programs that continue to align with those organizational goals. I'd love to hear more about your everboarding programs. What are some of your best practices for building everboarding programs that drive sales performance? In other words, what does good everboarding look like?
BK: I really believe in the power of everboarding because it also shows that you're a learn-it-all organization versus a know-it-all organization. I truly believe that in any function, learning does not stop when you hit that 90-day mark that oftentimes is the industry standard for onboarding. That continuous learning journey is ongoing in so many different ways and functions. To build a good everboarding program, I think you could look at it and dissect it in different ways. The first one is the discovery piece: truly understanding what are some of the problem statements that you're solving for. In today's world, we have a tsunami of information, and people are overwhelmed with how much they should get up to speed on. An everboarding program should sometimes also be a refresher. We have recharge programs here, and I really think some of those key skills that a lot of people learn in their early selling journey are needed very much in everboarding programs. Examples that come to mind include prospecting, objection handling, mission understanding, and negotiation skills. Those are key to any seller in any role, and sometimes those refreshers can be absolutely valuable to drive those customer conversations and reduce time to ramp. The second piece that comes to mind is making sure you're always up to date with what's happening in the market. That brings me to the second point: product knowledge and market understanding. A lot of SaaS companies have so many solutions and products that they're trying to bring to market, and one way to really do that is certifying reps and making sure that they're actually going through the knowledge check and getting certified on a specific use case. I'm a firm believer in having certifications on any new product releases and also on new market trend understanding, because that also shows your customer that you are meeting them where they need you to be as well. Last but not least, an everboarding program, or as a matter of fact, any enablement program, should align with business objectives. That includes clear OKRs to measure its success and iterate from there. If I zoom out, those are the three key things I would look at from an everboarding perspective and build from there.

SS: Yeah, absolutely. Bana, what would you say is the importance of having everboarding programs rather than just onboarding programs?

BK: One of the things that is important in any organization is staying agile and committing to a continuous learning journey. As I mentioned before, it's learn-it-all versus know-it-all, because of the pace and the agility that the market is moving towards. I think having everboarding programs is not a nice-to-have, it is an absolute must-have, in my humble opinion, to be successful, have your position in the market lead, and truly help to solve one customer problem at a time. A beautiful way to do that is to help grow the business and grow your own knowledge as well, whether you're a seller, a partner, or even someone in enablement, because I believe that you should enable the enablement org as well. You do that through creating everboarding programs to maintain that high performance and retention, and also hit your OKRs at the end of the day.

SS: Absolutely. Everboarding is increasingly important, especially as you try to make sure that you're maximizing the productivity of your in-sync sales team. Now your everboarding programs have reached upwards of 400 employees globally.
What tips do you have, maybe, for our audience who are also trying to create enablement programs on a global scale?

BK: I really believe that scale happens a bit easier than we expect when we're solving the right problem. What I've noticed at Amazon, as an example, is that the problem statement is often shared across different functions and across different geographies, versus only the actual customers that you're looking after. That brings me to the first part of problem-solving, which is ensuring that the discovery phase is done really well. What I mean by that is deciding what problem to solve first, and from there, you move into the solution. Enablement could and should, in my opinion, spend a bit of time on the discovery phase understanding first: is that problem statement shared across? Is that a global or a regional problem only? You do that by asking the same set of questions, as an example, across the board to understand who's the customer at this point. What are some of the key missions that they're solving for? Who are the customer profiles that they look at at the moment? What is their impact on the industry? What vertical do they sit in? What are some of the KPIs that they're assessed against? More often than not, sellers have similar metrics, but different numbers that they need to hit. That's one way to approach it when you're trying to scale, before you move into build mode. The second part brings me to the ID&E perspective, because I'm a huge believer in getting different perspectives, getting content reviews, and even having content created by different people across the company, and having that cross-functional cross-pollination happening to build the best products you have so they get the impact that they need. The third piece, if I'm thinking of the power of scaling, is what happens afterward. How do you make sure that you tie your input to your output through what we call a mechanism? That's when you build through iterations, have a phased approach and a very clear feedback process built in and woven in, and you hold yourself and your stakeholders accountable to make sure that whatever you're building is insisting on the highest standards and also really impacting the end customer and helping them move faster towards their mission. If you put those three things together, that's when a beautiful Venn diagram is shaped and you see the impact of what we think of as the power of scaling.

SS: I think that is amazing. You have done a phenomenal job building these programs at scale. Now, as you mentioned in your introduction, you are also a co-founder of the EMEA inclusion, diversity, and equity chapter, and I think you also had action at the end of that, at AWS. How do you incorporate ID&E best practices into your enablement programs, and what would you say is the impact of doing so?

BK: I love that question, Shawnna. Thank you for addressing it, especially in today's world where ID&E is really helping a lot of customers understand what is important and how to create that diverse product line, the best programs, and build better. To achieve this, in my opinion, the first thing you could look at is how you could address some of the biases we all have. Everyone has biases, including myself, and those are just the mental shortcuts that show up in your programs and the content you produce.
The first thing that I try to address when I have a new program is to build an advisory board and have different people with different experiences and backgrounds help build it up. You can cover it from different angles. If we focus and double-click on the enablement programs, you could also have people from different functions that you look after, like sales ops or biz ops, who should be part of that. The other piece is having different and equal representation from your customers, for example, the different geographies that you cover, different countries, different verticals, different personas, et cetera, to bring the experience that you actually need to build that best product. If anyone wants a practitioner tip, one of the things that really helped me uncover some of those biases and understand them better is the Harvard Project Implicit test. The second thing you could do is also have diverse speakers when you build those programs. Building the content and having the content reviews and the advisory board is one thing, and then you move into the build phase. That's where diverse speakers can help refine the program and bring that message to different folks. That can already embed representation, having different levels of seniority and creating opportunities for underrepresented groups throughout the process, versus just calling it global and having speakers from one country, as an example. The last topic, which is a dear topic to my heart and something that I'm trying now to learn more about, is neurodivergence. A lot of products that we create sometimes have technical jargon and not the simplest visual aids for people to understand. We can take a step back and think from different perspectives, and through that advisory board that you build, you can understand the different needs. For example, how do you build for people with visual impairment? How do you build for people with dyslexia? Understanding your neurodivergent customers in different sectors can be overwhelming at the beginning, but it is an absolute must to have that inclusive and best product. Those are the three key ingredients that come into play for enablement. The key ingredient in all of that is woven in through communication. When you have communication flowing bottom-up, top-down, and sideways, you make sure that you're also using that inclusive language and embodying inclusion throughout, to make adjustments where you need and stay humble. As I mentioned at the beginning, the A part comes into play. It's not enough to say we care about ID&E but not embed ID&E throughout the content and the programs that we build. Every seller deserves an equal chance to have the best impact they could have on their customer, and it starts with the enablement team to do that. That's my two cents on ID&E and enablement, Shawnna.

SS: I love that. The last question for you: recently I saw a post from you on LinkedIn about how generative AI is really transforming businesses, including some of the ways that it influences the ID&E space. How do you think AI will influence how you create and deliver enablement programs in the next year and maybe even beyond?

BK: I really believe that in today's world, we have far more accessibility on the topic of AI than ever before, thanks to generative AI. AI has been around for a while now, and whether we thought about it or not, it has shaped how we learn in different ways.
Whether we think of personalized self-learning and customized versions of learning, chatbots, which are quite prominent in today's world, virtual assistants, or simulated learning, it has been around for quite some time. What I believe is important is how we're using it, and the ethical framework around it, because it's here to stay. I really think those tools can help us if used right, and if they're stress-tested and their accuracy is measured, they can help us be more productive. They can also help us reduce our time to impact and our time to market. When we have that embedded in our processes, for example leveraging generative AI for text summarization, it can already have an impact on our sellers, and that will have a downstream effect on the end customer we are already helping on their mission. I do believe that AI and generative AI can absolutely personalize learning experiences and provide real-time performance insights, let alone automate content delivery. I really think it's important to develop those mechanisms, and I would also stress the ethical framework around it, to build for impact and build for performance. I'd like to tie that to what I mentioned in one of my answers earlier today about having a more agile and adaptive selling team. You do that when you use the resources that are available to you to help your learners grow in their own journey, and remain obsessed with the right technology at the right time and in the right way.

SS: I think that is phenomenal. Thank you so much for joining us today. I really appreciate your insight.

BK: Thank you for having me, I really enjoyed listening and having that discussion with you, Shawnna.

SS: To our audience, thanks for listening. For more insights, tips, and expertise from sales enablement leaders, visit salesenablement.pro. If there's something you'd like to share or a topic you'd like to learn more about, please let us know. We'd love to hear from you.
Network Security with KevTechify on the Cisco Certified Network Associate (CCNA)
In this episode we are going to look at Intrusion Prevention System (IPS) on Cisco Integrated Services Routers (ISRs). We will be discussing Intrusion Prevention System (IPS) Components, Cisco IOS IPS, Snort IPS, Snort Operation, Snort Features, and Snort System Requirements.

Thank you so much for listening to this episode of my series on Network Security.

Once again, I'm Kevin and this is KevTechify. Let's get this adventure started.

All my details and contact information can be found on my website, https://KevTechify.com

-------------------------------------------------------
Network Security v1
Episode 11 - IPS Technologies
Part C - IPS on Cisco ISRs
Podcast Number: 45
-------------------------------------------------------

Equipment I like.
Home Lab ►► https://kit.co/KevTechify/home-lab
Networking Tools ►► https://kit.co/KevTechify/networking-tools
Studio Equipment ►► https://kit.co/KevTechify/studio-equipment
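Since the episode covers Snort IPS on these routers, here is a minimal example of the rule syntax that a Snort-based IPS evaluates; it is a generic illustration, not taken from the episode, and the SID, message text, and use of the $HOME_NET variable are assumptions for the sketch.

# Hypothetical rule for illustration: alert on inbound Telnet attempts to the protected network.
alert tcp any any -> $HOME_NET 23 (msg:"Inbound Telnet attempt"; sid:1000001; rev:1;)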
In this episode, Jason Schafer, PharmD, MPH, explores the data for long-acting injectable pre-exposure prophylaxis (PrEP).

Listen as he gives his perspectives on the:
PrEP pipeline and the importance of having options for patients with differing preferences
FDA approval of long-acting cabotegravir for PrEP
HPTN 083 and 084 studies of long-acting cabotegravir for PrEP compared with oral emtricitabine/tenofovir disoproxil fumarate
Safety data on long-acting cabotegravir for PrEP
CDC recommendations on the management of injection-site reactions with cabotegravir for PrEP

Presenter:
Jason Schafer, PharmD, MPH
Professor and Vice Chair
Jefferson College of Pharmacy
Thomas Jefferson University
Philadelphia, Pennsylvania

Follow along with the slides at: https://bit.ly/3tbvEPK
See the entire program at: https://bit.ly/3q2DlGd
Today, we're talking with Joe Venuti, VP of Inside Sales at Sendoso. Joe leads a team of approximately 50 ISRs over in Scottsdale, AZ, and he gave me time out of his busy schedule to share how direct mail is getting results in a world where so many reps can only think digital when it comes to prospecting and prospect engagement.

In this week's episode:
What does Sendoso do?
What is Joe's role as VP of Inside Sales?
What is the makeup of his sales org?
What role does Inside Sales play in the sales org?
How does he find and nurture talent? (SDR / BDR / XDR?)
What does it mean to Joe to be a coach to his team?

Check it out now.
Dr. Renato Ambrósio, Jr., MD, PhD is an ophthalmologist specializing in the anterior segment, highly respected in Brazil, who today leads the global society of refractive and cataract surgeons (ISRS). Dr. Renato has trained at the Instituto de Oftalmologia Tadeu Cvintal (IOTC), USP, and the University of Washington with Dr. Steven Wilson. He comes from a great family of ophthalmologists and managed to carve out his own space and become a national and international reference within the specialty. In 2018 he was among the 100 most influential people in the world according to the British magazine "The Ophthalmologist". • For those who want to watch the video of this episode: https://youtu.be/GTAZIJaO87s • Follow our Instagram for weekly updates: https://www.instagram.com/fala.doutores/
Apache Kafka® 2.7 is here! Here are the key Kafka Improvement Proposals (KIPs) and updates in this release, presented by Tim Berglund.

KIP-497 adds a new inter-broker API to alter in-sync replicas (ISRs). Every partition leader maintains the ISR list, or the list of ISRs. KIP-497 is also related to the removal of ZooKeeper.

KIP-599 has to do with throttling the rate of creating topics, deleting topics, and creating partitions. This KIP will add a new feature called the controller mutation rate.

KIP-612 adds the ability to limit the connection creation rate on brokers, while KIP-651 supports the PEM format for SSL certificates and private keys.

The release of Kafka 2.7 furthermore includes end-to-end latency metrics and sliding windows.

Find out what's new with the Kafka broker, producer, and consumer, and what's new with Kafka Streams in today's episode of Streaming Audio!

EPISODE LINKS
Read about what's new in Apache Kafka 2.7
Check out the Apache Kafka 2.7 release notes
Watch the video version of this podcast
Join the Confluent Community Slack
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Kafka streaming in 10 minutes on Confluent Cloud
Use 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
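To make the sliding windows mention concrete, here is a brief Kafka Streams sketch using the SlidingWindows API that arrived with this release; the topic names, window size, and grace period are illustrative assumptions, and serde configuration plus the actual KafkaStreams startup are omitted.

import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.SlidingWindows;

public class SlidingWindowSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // "clicks" and "click-counts" are hypothetical topic names.
        KStream<String, String> clicks = builder.stream("clicks");

        // Count events per key over a 5-minute sliding window with a 1-minute
        // grace period for late records (SlidingWindows is new in Kafka 2.7).
        clicks.groupByKey()
              .windowedBy(SlidingWindows.withTimeDifferenceAndGrace(
                      Duration.ofMinutes(5), Duration.ofMinutes(1)))
              .count()
              .toStream()
              .to("click-counts");

        // Only builds and prints the topology; no cluster connection is made here.
        System.out.println(builder.build().describe());
    }
}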
The combination of SSRI antidepressants and tramadol can lead to serious health outcomes for patients. In this episode, Prof. Lincoln Cardoso explains this important drug interaction and the management approaches in pharmaceutical care. Want to know more? #saberfarmácia #Farmacosophia #proflincolncardoso
Paralytic mydriasis (multiple sclerosis, syphilis, brainstem stroke, intracerebral hemorrhages...), acute angle-closure glaucoma, botulism, diphtheria, rupture of the iris sphincter, iris trauma, alcohol, crack, cocaine, mushrooms (amanitas, psilocybes), SSRI antidepressants.
Antidepressants are drugs widely used to treat mood-related problems. We cover their origins, use, and effects (both positive and negative). MAOIs, tricyclics, and SSRIs are different types of antidepressants with different effects. What happens when you stop taking them? What benefits can we expect? How do they work? These are some of the questions we address in this podcast. If you want to dig deeper, I recommend the book "Hablando claro" by J. Moncrieff, Editorial Herder. On the myth of antidepressants, see podcast #105. More information about the Cemyc clinic at Cemyc.com
Looking at Lumina Desktop 2.0, 2 months of KPTI development in SmartOS, OpenBSD email service, an interview with Ryan Zezeski, NomadBSD released, and John Carmack's programming retreat with OpenBSD. This episode was brought to you by

Headlines

Looking at Lumina Desktop 2.0 (https://www.trueos.org/blog/looking-lumina-desktop-2-0/)

A few weeks ago I sat down with Lead Developer Ken Moore of the TrueOS Project to get answers to some of the most frequently asked questions about Lumina Desktop from the open source community. Here is what he said on Lumina Desktop 2.0. Do you have a question for Ken and the rest of the team over at the TrueOS Project? Make sure to read the interview and comment below. We are glad to answer your questions!

Ken: Lumina Desktop 2.0 is a significant overhaul compared to Lumina 1.x. Almost every single subsystem of the desktop has been streamlined, resulting in a nearly-total conversion in many important areas. With Lumina Desktop 2.0 we will finally achieve our long-term goal of turning Lumina into a complete, end-to-end management system for the graphical session and removing all the current runtime dependencies from Lumina 1.x (Fluxbox, xscreensaver, compton/xcompmgr). The functionality from those utilities is now provided by Lumina Desktop itself.

Going along with the session management changes, we have compressed the entire desktop into a single, multi-threaded binary. This means that if any rogue script or tool starts trying to muck about with the memory used by the desktop (probably even more relevant now than when we started working on this), the entire desktop session will close/crash rather than allowing targeted application crashes to bypass the session security mechanisms. By the same token, this also prevents "man-in-the-middle" type of attacks because the desktop does not use any sort of external messaging system to communicate (looking at you, dbus). This also gives a large performance boost to Lumina Desktop.

The entire system for how a user's settings get saved and loaded has been completely redone, making it a "layered" settings system which allows the default settings (Lumina) to get transparently replaced by system settings (OS/Distributor/SysAdmin), which can get replaced by individual user settings. This results in the actual changes in the user setting files being kept to a minimum and allows for a smooth transition between updates to the OS or Desktop. This also provides the ability to "restrict" a user's desktop session (based on a system config file) to the default system settings and read-only user sessions for certain business applications.

The entire graphical interface has been written in QML in order to fully utilize hardware-based GPU acceleration with OpenGL, while the backend logic and management systems are still written entirely in C++. This results in blazing fast performance on the backend systems (myriad multi-threaded C++ objects) as well as a smooth and responsive graphical interface with all the bells and whistles (drag and drop, compositing, shading, etc).

Q: Are there future plans to implement something like Lumina in a MAC Jail?

While I have never tried out Lumina in a MAC jail, I do not see anything on that page which should stop it from running in one right now. Lumina is already designed to be run as an unprivileged user and is very smart about probing the system to find out what is/is not available before showing anything to the user.
The only thing that comes to mind is that you might need to open up some other system devices so that X11 itself can draw to the display (graphical environment setup is a bit different than a CLI environment).

Q: I look forward to these changes. I know the last time I used it, when I would scroll I would get flashes like the refresh rate was not high enough. It will be nice to have a fast system, as I know with the more changes Linux is becoming slower. Not once it has loaded, but in the loading process. I will do another download when these changes come out and install again and maybe stay this time.

If I recall correctly, one of the very first versions of Lumina (pre-1.0) would occasionally flicker. If that is still happening, you might want to verify that you are using the proper video driver for your hardware and/or enable the compositor within the Lumina settings.

Q: Why was the Enlightenment project not considered for TrueOS? It is BSD licensed and is written in C.

This was a common question about 4(?) years ago with the first release of the Lumina desktop, and it basically boiled down to long-term support and reliability of the underlying toolkit. Some of the things we had to consider were: cross-platform/cross-architecture support, dependency reliability and support framework (Qt5 > EFL), and runtime requirements and dependency tracking (Qt5 is lighter than the EFL). That plus the fact that the EFL specifically states that it is linux-focused and the BSDs are just an afterthought (especially at the time we were doing the evaluation).

Q: I have two questions. 1) The default layout of Unity (menu bar with actual menu entries on top and icon dock on the side) is one of the few things I liked about my first voyage into non-Windows systems, and have been missing since moving on to other distros (and now also other non-Linux systems). However, in the 1.4.0 screenshots on Lumina's site, the OSX-like layout has the menu attached to the window. Will 2.0 be able to have the menus on the bar? 2) Is there any timeline for a public release, or are you taking a "when it's ready" approach?

In Lumina you can already put panels on the left/right side of the screen to give you something like the layout of the Unity desktop. The embedded menu system is not available in Lumina because that is not a specification supported by X11 and the window manager standards at the present time. The way that functionality is currently run on Linux is a hacky bypass of the display system which only really works with the GTK3 and Qt5 toolkits, resulting in very odd overall desktop behavior in mixed environments where some apps use other graphical toolkits. We are targeting the 18.06 STABLE release of TrueOS for Lumina 2, but that is just a guideline, and if necessary we will push back the release date to allow for additional testing/fixing as needed.

A long two months (https://blog.cooperi.net/a-long-two-months)

illumos/SmartOS developer Alex Wilson describes the journey of developing KPTI for illumos.

On Monday (January 1st) I had the day off work for New Year's day, as is usual in most of the western world, so I slept in late. Lou and her friend decided to go to the wax museum and see several tourist attractions around SF, and I decided to pass the day at home reading. That afternoon, work chat started talking about a Tumblr post by pythonsweetness about an Intel hardware security bug. At the time I definitely did not suspect that this was going to occupy most of my working life for the next (almost) two months.
Like many people who work on system security, I had read Anders Fogh's post about a "Negative Result" in speculative execution research in July of 2017. At the time I thought it was an interesting writeup and I remember being glad that researchers were looking into this area. I sent the post to Bryan and asked him about his thoughts on it at the time, to which he replied saying that "it would be shocking if they left a way to directly leak out memory in the speculative execution". None of us seriously thought that there would be low-hanging fruit down that research path, but we also felt it was important that there was someone doing work in the area who was committed to public disclosure. At first, after reading the blog post on Monday, we thought (or hoped) that the bug might "just" be a KASLR bypass and wouldn't require a lot of urgency. We tried to reach out to Intel at work to get more information but were met with silence. (We wouldn't hear back from them until after the disclosure was already made public.) The speculation on Tuesday intensified, until finally on Wednesday morning I arrived at the office to find links to late Tuesday night tweets revealing exploits that allowed arbitrary kernel memory reads. Wednesday was not a happy day. Intel finally responded to our emails -- after they had already initiated public disclosure. We all spent a lot of time reading. An arbitrary kernel memory read (an info leak) is not that uncommon as far as bugs go, but for the most part they tend to be fairly easy to fix. The thing that makes the Meltdown and Spectre bugs particularly notable is that in order to mitigate them, a large amount of change is required in very deep low-level parts of the kernel. The kind of deep parts of the kernel where there are 20-year old errata workarounds that were single-line changes that you have to be very careful to not accidentally undo; the kind of parts where, as they say, mortals fear to tread. On Friday we saw the patches Matthew Dillon put together for DragonFlyBSD for the first time. These were the first patches for KPTI that were very straightforward to read and understand, and applied to a BSD-derived kernel that was similar to those I'm accustomed to working on. To mitigate Meltdown (and partially one of the Spectre variants), you have to make sure that speculative execution cannot reach any sensitive data from a user context. This basically means that the pages the kernel uses for anything potentially sensitive have to be unmapped when we are running user code. Traditionally, CPUs that were built to run a multi-user, UNIX-like OS did this by default (SPARC is an example of such a CPU which has completely separate address spaces for the kernel and userland). However, x86 descends from a single-address-space microcontroller that has grown up avoiding backwards-incompatible changes, and has never really introduced a clean notion of multiple address spaces (segmentation is the closest feature really, and it was thrown out for 64-bit AMD64). Instead, operating systems for x86 have generally wound up (at least in the post-AMD64 era) with flat address space models where the kernel text and data is always present in the page table no matter whether you're in user or kernel mode. The kernel mappings simply have the "supervisor" bit set on them so that user code can't directly access them. The mitigation is basically to stop doing this: to stop mapping the kernel text, data and other memory into the page table while we're running in userland. 
Unfortunately, the x86 design does not make this easy. In order to be able to take interrupts or traps, the CPU has to have a number of structures mapped in the current page table at all times. There is also no ability to tell an x86 CPU that you want it to switch page tables when an interrupt occurs. So, the code that we jump to when we take an interrupt, as well as space for a stack to push context onto have to be available in both page tables. And finally, of course, we need to be able to figure out somehow what the other page table we should switch to is when we enter the kernel. When we looked at the patches for Linux (and also the DragonFlyBSD patches at the time) on Friday and started asking questions, it became pretty evident that the initial work done by both was done under time constraints. Both had left the full kernel text mapped in both page tables, and the Linux trampoline design seemed over-complex. I started talking over some ideas with Robert Mustacchi about ways to fix these and who we should talk to, and reached out to some of my old workmates from the University of Queensland who were involved with OpenBSD. It seemed to me that the OpenBSD developers would care about these issues even more than we did, and would want to work out how to do the mitigation right. I ended up sending an email to Philip Guenther on Friday afternoon, and on Saturday morning I drove an hour or so to meet up with him for coffee to talk page tables and interrupt trampolines. We wound up spending a good 6 hours at the coffee shop, and I came back with several pages of notes and a half-decent idea of the shape of the work to come. One detail we missed that day was the interaction of per-CPU structures with per-process page tables. Much of the interrupt trampoline work is most easily done by using per-CPU structures in memory (and you definitely want a per-CPU stack!). If you combine that with per-process page tables, however, you have a problem: if you leave all the per-CPU areas mapped in all the processes, you will leak information (via Meltdown) about the state of one process to a different one when taking interrupts. In particular, you will leak things like %rip, which ruins all the work being done with PIE and ASLR pretty quickly. So, there are two options: you can either allocate the per-CPU structures per-process (so you end up with $NCPUS * $NPROCS of them); or you can make the page tables per-CPU. OpenBSD, like Linux and the other implementations so far, decided to go down the road of per-CPU per-process pages to solve this issue. For illumos, we took the other route. In illumos, it turned out that we already had per-CPU page tables. Robert and I re-discovered this on the Sunday of that week. We use them for 32-bit processes due to having full P>V PAE support in our kernel (which is, as it turns out, relatively uncommon amongst open-source OS). The logic to deal with creating and managing them and updating them was all already written, and after reading the code we concluded we could basically make a few small changes and re-use all of it. So we did. By the end of that second week, we had a prototype that could get to userland. But, when working on this kind of kernel change we have a rule of thumb we use: after the first 70% of the patch is done and we can boot again, now it's time for the second 70%. In fact it turned out to be more like the second 200% for us -- a tedious long tail of bugs to solve that ended up necessitating some changes in the design as well. 
At first we borrowed the method that Matt Dillon used for DragonFlyBSD, by putting the temporary "stack" space and state data for the interrupt trampolines into an extra page tacked onto the end of *%gs (in illumos the structure that lives there is the cpu_t). If you read the existing logic in interrupt handlers for dealing with %gs, though, you will quickly notice that the corner cases start to build up. There are a bunch of situations where the kernel temporarily alters %gs, and some of the ways to mess it up have security consequences that end up being worse than the bug we're trying to fix. As it turns out, there are no fewer than 3 different ways that ISRs use to try to get the right cpu_t into %gs on illumos, and they are all subtly different. Trying to tell which you should use when requires a bunch of test logic that in turn requires branches and changes to the CPU state, which is difficult to do in a trampoline where you're trying to avoid altering that state as much as possible until you've got the real stack online to push things into. I kept in touch with Philip Guenther and Mike Larkin from the OpenBSD project throughout the weeks that followed. In one of the discussions we had, we talked about the NMI/MCE handlers and the fact that their current handling on OpenBSD neglected some nasty corner cases around interrupting an existing trap handler. A big part of the solution to those issues was to use a feature called IST, which allows you to unconditionally change stacks when you take an interrupt. Traditionally, x86 only changes the stack pointer (%rsp on AMD64) while taking an interrupt when there is a privilege level change. If you take an interrupt while already in the kernel, the CPU does not change the stack pointer, and simply pushes the interrupt stack frame onto the stack you're already using. IST makes the change of stack pointer unconditional. If used unwisely, this is a bad idea: if you stay on that stack and turn interrupts back on, you could take another interrupt and clobber the frame you're already in. However, in IST I saw a possible way to simplify the KPTI trampoline logic and avoid having to deal with %gs. A few weeks into the project, John Levon joined us at work. He had previously worked on a bunch of Xen-related stuff as well as other parts of the kernel very close to where we were, so he quickly got up to speed with the KPTI work as well. He and I drafted out a "crazy idea" on the whiteboard one afternoon where we would use IST for all interrupts on the system, and put the "stack" they used in the KPTI page on the end of the cpu_t. The trampolines could then easily use stack-relative addresses to get the page table to change to, then pivot their stack to the real kernel stack memory, and throw away (almost) all the conditional logic. A few days later, we had convinced each other that this was the way to go. Two of the most annoying x86 issues we had to work around were related to the SYSENTER instruction. This instruction is used to make "fast" system calls in 32-bit userland. It has a couple of unfortunate properties: firstly, it doesn't save or restore RFLAGS, so the kernel code has to take care of this (and be very careful not to clobber any of it before saving or after restoring it). Secondly, if you execute SYSENTER with the TF ("trap"/single-step flag) set by a debugger, the resulting debug trap's frame points at kernel code instead of the user code where it actually happened.
The first one requires some careful gymnastics on the entry and return trampolines specifically for SYSENTER, while the second is a nasty case that is incidentally made easier by using IST. With IST, we can simply make the debug trap trampoline check whether we took the trap in another trampoline's code, and reset %cr3 and the destination stack. This works for single-stepping into any of the handlers, not just the one for SYSENTER. To make debugging easier, we decided that traps like the debug/single-step trap (as well as faults like page faults, #GP, etc.) would push their interrupt frame in a different part of the KPTI state page than normal interrupts. We applied this change to all the traps that can interrupt another trampoline (based on the instructions we used). These "paranoid" traps also set a flag in the KPTI struct to mark it busy (and jump to the double-fault handler if it already is), to work around some bugs where double-faults are not correctly generated. It's been a long and busy two months, with lots of time spent building, testing, and validating the code. We've run it on as many kinds of machines as we could get our hands on, to try to make sure we catch issues. The time we've spent on this has been validated several times in the process by finding bugs that could have been nasty in production. One great example: our patches on Westmere-EP Xeons were causing busy machines to throw a lot of L0 I-cache parity errors. This seemed very mysterious at first, and it took us a few times seeing it to believe that it was actually our fault. It was caused by the accidental activation of a CPU erratum for Westmere (B52, "Memory Aliasing of Code Pages May Cause Unpredictable System Behaviour") -- it turned out we had made a typo and put the "cacheable" flag into a variable named flags instead of attrs, where it belonged, when setting up the page tables. This caused performance degradation on other machines, but on Westmere it caused cache parity errors as well. This is a great example of the surprising consequences that small mistakes in this kind of code can end up having. In the end, I'm glad that that erratum existed, otherwise it may have been a long time before we caught that bug. As of this week, Mike and Philip have committed the OpenBSD patches for KPTI to their repository, and the patches for illumos are out for review. It's a nice kind of symmetry that the two projects, which started on the work together right after the public disclosure, are both almost ready to ship at about the same time at the other end. I'm feeling hopeful, and looking forward to further future collaborations like this with our cousins, the BSDs. The illumos work has since landed, on March 12th (https://github.com/joyent/illumos-joyent/commit/d85fbfe15cf9925f83722b6d62da49d549af615c) *** OpenBSD Email Service (https://github.com/vedetta-com/caesonia) Features: Efficient: configured to run on a minimum of 512MB RAM and a 20GB SSD, a KVM (cloud) VPS for around $2.50/mo; 15GB+ uncompressed Maildir, rivals top free-email providers (grow by upgrading the SSD); email messages are gzip-compressed, at least 1/3 more space with the level 6 default; server-side full-text search (headers and body) can be enabled (to use the extra space); mobile-data friendly: IMAPS connections are compressed; subaddress (+tag) support, to filter and monitor email addresses; virtual domains, aliases, and credentials in files, Berkeley DB, or SQLite3; naive Bayes rspamd filtering with supervised learning: the lowest false-positive spam detection rates; carefree automated Spam/ and Trash/ cleaning service (default: older than 30 days); automated quota management, gently assists when over quota; easy backup MX setup: using the same configuration, install in minutes on a different host; worry-free automated master/master replication with the backup MX, preventing accidental loss of email messages; resilient: the backup MX can be used as primary, even when the primary is not down, both being perfect replicas; flexible: switching roles is easy, making the process of changing VPS hosts a breeze (no downtime); a DMARC (with DKIM and SPF) email-validation system, to detect and prevent email spoofing; daily (spartan) stats, to keep track of things; your sieve scripts and managesieve configuration; let's get started. Considerations: By design, email message headers need to be public, for exchanges to happen. The body of the message can be encrypted by the user, if desired. Moreover, there is no way to prevent the host from having access to the virtual machine. Therefore, full disk encryption (at rest) may not be necessary. Given the low memory requirements and the single-purpose concept of the email service, Roundcube or other web-based IMAP email clients should be on a different VPS. Antivirus software users (usually) have the service running on their devices. ClamAV can easily be incorporated into this configuration, if affected by the types of malware it protects against, but it will require around 1GB of additional RAM (or another VPS). Every email message is important, if properly delivered, for Bayes classification. At least 200 ham and 200 spam messages are required to learn what one considers junk. By default (adjust to your use case), an rspamd score above 50% will send the message to Spam/. Moving messages in and out of Spam/ changes this score. Above 95%, the message is flagged as "seen" and can be safely ignored. Spamd is effective at greylisting and stopping high-volume spam, if that becomes a problem; it will be an option when IPv6 is supported, along with bgp-spamd. System mail is delivered to an alias mapped to a virtual user served by the service. This way, messages are guaranteed to be delivered via an encrypted connection. It is not possible for real users to alias, nor to mail an external mail address, with the default configuration. E.g. puffy@mercury.example.com is wheel, with an alias mapped to (virtual) puffy@example.com, and the user (puffy) can be different for each. Interview - Ryan Zezeski - rpz@joyent.com (mailto:rpz@joyent.com) / @rzezeski (https://twitter.com/rzezeski) News Roundup: John Carmack's programming retreat to hermit coding with OpenBSD (https://www.facebook.com/permalink.php?story_fbid=2110408722526967&id=100006735798590) After a several-year gap, I finally took another week-long programming retreat, where I could work in hermit mode, away from the normal press of work.
My wife has been generously offering it to me the last few years, but I'm generally bad at taking vacations from work. As a change of pace from my current Oculus work, I wanted to write some from-scratch-in-C++ neural network implementations, and I wanted to do it with a strictly base OpenBSD system. Someone remarked that it is a pretty random pairing, but it worked out ok. Despite not having actually used it, I have always been fond of the idea of OpenBSD — a relatively minimal and opinionated system with a cohesive vision and an emphasis on quality and craftsmanship. Linux is a lot of things, but cohesive isn't one of them. I'm not a Unix geek. I get around ok, but I am most comfortable developing in Visual Studio on Windows. I thought a week of full-immersion work in the old-school Unix style would be interesting, even if it meant working at a slower pace. It was sort of an adventure in retro computing — this was fvwm and vi. Not vim, actual BSD vi. In the end, I didn't really explore the system all that much, with 95% of my time in just the basic vi / make / gdb operations. I appreciated the good man pages, as I tried to do everything within the self-contained system, without resorting to internet searches. Seeing references to 30+ year-old things like Tektronix terminals was amusing. I was a little surprised that the C++ support wasn't very good. G++ didn't support C++11, and LLVM C++ didn't play nicely with gdb. Gdb crashed on me a lot as well, I suspect due to C++ issues. I know you can get more recent versions through ports, but I stuck with using the base system. In hindsight, I should have just gone full retro and done everything in ANSI C. I do have plenty of days where, like many older programmers, I think "Maybe C++ isn't as much of a net positive as we assume...". There is still much that I like, but it isn't a hardship for me to build small projects in plain C. Maybe next time I do this I will try to go full emacs, another major culture that I don't have much exposure to. I have a decent overview understanding of most machine learning algorithms, and I have done some linear classifier and decision tree work, but for some reason I have avoided neural networks. On some level, I suspect that Deep Learning being so trendy tweaked a little bit of contrarian in me, and I still have a little bit of a reflexive bias against "throw everything at the NN and let it sort it out!" In the spirit of my retro theme, I had printed out several of Yann LeCun's old papers and was considering doing everything completely offline, as if I was actually in a mountain cabin somewhere, but I wound up watching a lot of the Stanford CS231N lectures on YouTube, and found them really valuable. Watching lecture videos is something that I very rarely do — it is normally hard for me to feel the time is justified, but on retreat it was great! I don't think I have anything particularly insightful to add about neural networks, but it was a very productive week for me, solidifying "book knowledge" into real experience. I used a common pattern for me: get first results with hacky code, then write a brand-new and clean implementation with the lessons learned, so they both exist and can be cross-checked. I initially got backprop wrong both times; comparison with numerical differentiation was critical! It is interesting that things still train even when various parts are pretty wrong — as long as the sign is right most of the time, progress is often made.
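That last point, checking backprop against numerical differentiation, is worth spelling out, since it is the standard way to catch exactly the kind of bug described above. Below is a minimal, self-contained C++ sketch (not Carmack's code; the tiny layer sizes, random seed, and single example are made up purely for illustration) of a one-hidden-layer ReLU/softmax network whose analytic gradients are compared against central finite differences:

// Gradient-checking sketch: compare backprop gradients of a tiny
// 1-hidden-layer ReLU + softmax network against central differences.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

static const int IN = 4, HID = 5, OUT = 3;

struct Net {
    std::vector<double> w1, w2;   // w1: HID x IN, w2: OUT x HID (no biases, for brevity)
};

// Forward pass; returns cross-entropy loss for target class `label`,
// optionally filling the activations needed by backprop.
double forward(const Net& n, const std::vector<double>& x, int label,
               std::vector<double>* hidden = nullptr, std::vector<double>* probs = nullptr) {
    std::vector<double> h(HID), z(OUT), p(OUT);
    for (int j = 0; j < HID; ++j) {
        double s = 0;
        for (int i = 0; i < IN; ++i) s += n.w1[j * IN + i] * x[i];
        h[j] = s > 0 ? s : 0;                          // ReLU
    }
    double maxz = -1e300;
    for (int k = 0; k < OUT; ++k) {
        double s = 0;
        for (int j = 0; j < HID; ++j) s += n.w2[k * HID + j] * h[j];
        z[k] = s; maxz = std::max(maxz, s);
    }
    double sum = 0;
    for (int k = 0; k < OUT; ++k) { p[k] = std::exp(z[k] - maxz); sum += p[k]; }
    for (int k = 0; k < OUT; ++k) p[k] /= sum;         // softmax
    if (hidden) *hidden = h;
    if (probs) *probs = p;
    return -std::log(p[label]);                        // cross-entropy loss
}

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> g(0.0, 0.5);
    Net n{std::vector<double>(HID * IN), std::vector<double>(OUT * HID)};
    for (auto& w : n.w1) w = g(rng);
    for (auto& w : n.w2) w = g(rng);
    std::vector<double> x(IN);
    for (auto& v : x) v = g(rng);
    const int label = 1;

    // Analytic gradients via backprop on one example.
    std::vector<double> h, p;
    forward(n, x, label, &h, &p);
    std::vector<double> dz(OUT), dw2(OUT * HID), dh(HID, 0), dw1(HID * IN);
    for (int k = 0; k < OUT; ++k) dz[k] = p[k] - (k == label ? 1.0 : 0.0);
    for (int k = 0; k < OUT; ++k)
        for (int j = 0; j < HID; ++j) {
            dw2[k * HID + j] = dz[k] * h[j];
            dh[j] += dz[k] * n.w2[k * HID + j];
        }
    for (int j = 0; j < HID; ++j)
        for (int i = 0; i < IN; ++i)
            dw1[j * IN + i] = (h[j] > 0 ? dh[j] : 0.0) * x[i];

    // Numerical gradients via central differences, compared element by element.
    const double eps = 1e-5;
    double worst = 0;
    auto check = [&](std::vector<double>& w, const std::vector<double>& dw) {
        for (size_t i = 0; i < w.size(); ++i) {
            double save = w[i];
            w[i] = save + eps; double lp = forward(n, x, label);
            w[i] = save - eps; double lm = forward(n, x, label);
            w[i] = save;
            worst = std::max(worst, std::fabs((lp - lm) / (2 * eps) - dw[i]));
        }
    };
    check(n.w1, dw1);
    check(n.w2, dw2);
    std::printf("worst |analytic - numeric| = %g\n", worst);
    return 0;
}

With inputs and weights of order one and eps = 1e-5, the worst absolute difference should come out around 1e-9 or smaller; anything much larger usually means the backprop, not the finite-difference estimate, is wrong.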
I was pretty happy with my multi-layer neural net code; it wound up in a form that I can just drop into future efforts. Yes, for anything serious I should use an established library, but there are a lot of times when just having a single .cpp and .h file that you wrote every line of is convenient. My conv net code just got to the hacky-but-working phase; I could have used another day or two to make a clean and flexible implementation. One thing I found interesting was that when testing on MNIST with my initial NN before adding any convolutions, I was getting significantly better results than the non-convolutional NN reported for comparison in LeCun '98 — right around 2% error on the test set with a single 100-node hidden layer, versus 3% for both wider and deeper nets back then. I attribute this to modern best practices: ReLU, Softmax, and better initialization. This is one of the most fascinating things about NN work — it is all so simple, and the breakthrough advances are often things that can be expressed with just a few lines of code. It feels like there are some similarities with ray tracing in the graphics world, where you can implement a physically based light transport ray tracer quite quickly, and produce state-of-the-art images if you have the data and enough runtime patience. I got a much better gut-level understanding of overtraining / generalization / regularization by exploring a bunch of training parameters. On the last night before I had to head home, I froze the architecture and just played with hyperparameters. "Training!" is definitely worse than "Compiling!" for staying focused. Now I get to keep my eyes open for a work opportunity to use the new skills! I am dreading what my email and workspace are going to look like when I get into the office tomorrow. Stack-register Checking (https://undeadly.org/cgi?action=article;sid=20180310000858) Recently, Theo de Raadt (deraadt@) described a new type of mitigation he has been working on together with Stefan Kempf (stefan@): How about we add another new permission! This is not a hardware permission, but a software permission. It is opportunistically enforced by the kernel. The permission is MAP_STACK. If you want to use memory as a stack, you must mmap it with that flag bit. The kernel does so automatically for the stack region of a process. Two other types of stack occur: thread stacks, and alternate signal stacks. Those are handled in clever ways. When a system call happens, we check if the stack-pointer register points to such a page. If it doesn't, the program is killed. We have tightened the ABI. You may no longer point your stack register at non-stack memory. You'll be killed. This checking code is MI, so it works for all platforms. For more detail, see Theo's original message (https://marc.info/?l=openbsd-tech&m=152035796722258&w=2). This is now available in snapshots, and people are finding the first problems in the ports tree already. So far, few issues have been uncovered, but as Theo points out, more testing is necessary: Fairly good results. A total of 4 problems have been found so far. go, SBCL, and two cases in src/regress which failed the new page-alignment requirement. The SBCL and go ones were found at build time, since they use themselves to complete the build. But more page-alignment violations may be found in ports at runtime. This is something I worry about a bit.
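For third-party software that allocates its own thread or coroutine stacks (the go and SBCL cases mentioned above), the practical consequence is that those mappings now need the new flag. Here is a minimal illustrative sketch, not taken from any of the actual ports fixes, assuming an OpenBSD snapshot where <sys/mman.h> defines MAP_STACK and the kernel enforces the check (the stack size and worker function are made up; link with -lpthread):

// Hand-allocating a thread stack under the MAP_STACK requirement (sketch).
#include <sys/mman.h>
#include <pthread.h>
#include <cstddef>
#include <cstdio>

static void* worker(void*) {
    std::puts("running on a hand-allocated stack");
    return nullptr;
}

int main() {
    const size_t stacksize = 512 * 1024;            // mmap returns page-aligned memory
    // Without MAP_STACK here, a system call made while the stack pointer
    // points into this region would get the process killed under the new check.
    void* stack = mmap(nullptr, stacksize, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANON | MAP_STACK, -1, 0);
    if (stack == MAP_FAILED) { std::perror("mmap"); return 1; }

    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstack(&attr, stack, stacksize); // point the new thread at our region
    pthread_t t;
    if (pthread_create(&t, &attr, worker, nullptr) != 0) {
        std::fprintf(stderr, "pthread_create failed\n");
        return 1;
    }
    pthread_join(t, nullptr);
    pthread_attr_destroy(&attr);
    munmap(stack, stacksize);
    return 0;
}

Runtimes that manage their own stacks in other ways (green threads, language runtimes, hand-rolled coroutine switchers) need similar treatment, which is exactly the kind of issue the ports testing below is meant to flush out.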
So please, everyone out there can help: use snapshots which contain the stack-check diff, update to new packages, and test all possible packages. We really need a lot of testing for this, so please help out. So, everybody, install the latest snapshot and try all your favorite ports. This is the time to report issues you find, so there is a good chance this additional security feature is present in 6.3 (and works with third-party software from packages). NomadBSD 1.0 has been released (https://freeshell.de/~mk/projects/nomadbsd.html) NomadBSD is a live system for flash drives, based on FreeBSD® 11.1 (amd64). Change Log: The setup process has been improved. Support for optional geli encryption of the home partition has been added. Auto-detection of NVIDIA graphics cards and their corresponding driver has been added (thanks to holgerw and lme from BSDForen.de). An rc script to start the GEOM disk scheduler on the root device has been added. More software has been added: accessibility/redshift (starts automatically) audio/cantata audio/musicpd audio/ncmpc ftp/filezilla games/bsdtris mail/neomutt math/galculator net-p2p/transmission-qt5 security/fpm2 sysutils/bsdstats x11/metalock x11/xbindkeys. Several smaller improvements and bugfixes. Screenshots: https://freeshell.de/~mk/projects/nomadbsd-ss1.png https://freeshell.de/~mk/projects/nomadbsd-ss2.png https://freeshell.de/~mk/projects/nomadbsd-ss3.png https://freeshell.de/~mk/projects/nomadbsd-ss4.png https://freeshell.de/~mk/projects/nomadbsd-ss5.png https://freeshell.de/~mk/projects/nomadbsd-ss6.png Beastie Bits: KnoxBug - Nagios (http://knoxbug.org/2018-03-27) vBSDcon videos landing (https://www.youtube.com/playlist?list=PLfJr0tWo35bc9FG_reSki2S5S0G8imqB4) AsiaBSDCon 2017 videos (https://www.youtube.com/playlist?list=PLnTFqpZk5ebBTyXedudGm6CwedJGsE2Py) DragonFlyBSD Adds New "Ptr_Restrict" Security Option (https://www.phoronix.com/scan.php?page=news_item&px=DragonFlyBSD-Ptr-Restrict) A Dexter needs your help (https://twitter.com/michaeldexter/status/975603855407788032) Mike Larkin at bhyvecon 2018: OpenBSD vmm(4) update (https://undeadly.org/cgi?action=article;sid=20180309064801) [HEADS UP] - OFED/RDMA stack update (https://lists.freebsd.org/pipermail/freebsd-arch/2018-March/018900.html) *** Feedback/Questions: Ron - Interview someone using DragonflyBSD (http://dpaste.com/3BM6GSW#wrap) Brad - Gaming and all (http://dpaste.com/3X4ZZK2#wrap) Mohammad - Sockets vs TCP (http://dpaste.com/0PJMKRD#wrap) Paul - All or at least most of Bryan Cantrill's Talks (http://dpaste.com/2WXVR1X#wrap) ***
An interview with David Hardten, MD. Dr. David Hardten, program director for the 2011 Refractive Surgery Subspecialty Day, discusses the highlights of this ISRS-sponsored Annual Meeting, including hotly debated topics on detecting forme fruste keratoconus and wavefront-guided ablation, updates on corneal cross-linking and presbyopia correction, as well as advice on using femtosecond lasers for astigmatic keratotomy. (November 2011)
An interview with John Kanellopoulos, MD. Epi-on or epi-off? Should topography-guided PRK occur before or after cross-linking? How about intracorneal rings? Are surgeons cross-linking too much tissue? What exactly is keratoconus? Drs. A. John Kanellopoulos and Theo Seiler brought together worldwide experts to discuss these questions at an International Society of Refractive Surgery symposium during the 2010 Joint Meeting in Chicago. (December 2010)
Commentary from William Trattler, MD. This year
This recording represents my interpretation as a second-year medical student. It is provided as a study aid and is not intended to be a primary resource. It does not constitute medical advice. This episode discusses typical and atypical antipsychotics, antidepressants (SSRIs, SNRIs, MAOIs, TCAs, bupropion), mood stabilizers, and anxiolytics.