Podcasts about Stack Exchange

Network of Q&A sites based in New York City

  • 102 podcasts
  • 316 episodes
  • 52m avg duration
  • 1 new episode monthly
  • Latest: Apr 10, 2025

POPULARITY

(Popularity trend chart, 2017–2024)


Best podcasts about Stack Exchange

Latest podcast episodes about Stack Exchange

Casual Inference
Propensity Scores, R Packages, and Practical Advice with Noah Greifer | Season 6 Episode 3

Apr 10, 2025 · 82:09


Noah Greifer is a statistical consultant and programmer at Harvard University.

Episode notes:
WeightIt package: https://ngreifer.github.io/WeightIt/
MatchIt package: https://kosukeimai.github.io/MatchIt/
Noah's awesome Stack Exchange post: https://stats.stackexchange.com/a/544958

Follow along on Bluesky:
Noah: @noahgreifer.bsky.social
Ellie: @EpiEllie.bsky.social
Lucy: @LucyStats.bsky.social

Agent Survival Guide Podcast
5 Insurance Marketing Tips to Help Agents Stand Out in a Crowd

Mar 21, 2025 · 13:04


Why fit in when you can stand out and really shine? We highlight strategies insurance agents can use to elevate your marketing and set yourself apart in this industry.

Read the text version

Expand Your Insurance Industry Knowledge with Knight School!

Resources:
5 Tips for Creating Your Personal Brand: https://ritterim.com/blog/5-tips-for-creating-your-personal-brand/
Best Practices for Writing an Email to Your Insurance Clients: https://ritterim.com/blog/best-practices-for-writing-an-email-to-your-insurance-clients/
Build Your Brand with Community Involvement: https://ritterim.com/blog/build-your-brand-with-community-involvement/
CMS 2025 Marketplace Integrity & Affordability Proposed Rule: https://lnk.to/asgf20250314
Diversify Your Insurance Portfolio & Reap Real Rewards: https://lnk.to/asg651
How Professional Organizations Make You a Better Agent: https://ritterim.com/blog/how-professional-organizations-make-you-a-better-agent/
Keys to Client Retention: Face-to-Face Communication: https://ritterim.com/blog/keys-to-client-retention-face-to-face-communication/
Keys to Client Retention: Digital Communication: https://ritterim.com/blog/keys-to-client-retention-digital-communication/
Knight School Online Training: https://ritterim.com/knight-school/
Register with Ritter Insurance Marketing: https://app.ritterim.com/public/registration/
Should You Become a Certified Insurance Counselor? https://ritterim.com/blog/should-you-become-a-certified-insurance-counselor/

References:
Bicaku, Enina. “16 Striking Business Card Trends of 2025 (+ 54 Examples).” Looka.com, Looka, 2 Feb. 2025, looka.com/blog/business-card-trends/.
Khatri, Dimple. “28 Business Card Statistics That Will Surprise You in 2024.” PrintToBrand.com, Print To Brand, 2 Jan. 2025, printtobrand.com/business-cards-statistics/.
“AHIP.” AHIP.org, AHIP, www.ahip.org/. Accessed 18 Mar. 2025.
“Business Cards Templates & Designs.” Vistaprint.com, VistaPrint, www.vistaprint.com/business-cards/standard/templates. Accessed 18 Mar. 2025.
“English Language & Usage Stack Exchange.” English.StackExchange.com, English Language & Usage Stack Exchange, english.stackexchange.com/. Accessed 18 Mar. 2025.
“Fonts That Get Your Business Card Noticed.” PsPrint.com, PsPrint, www.psprint.com/resources/powerful-business-card-fonts/. Accessed 18 Mar. 2025.
“Grammarly.” Grammarly.com, Grammarly, www.grammarly.com/. Accessed 18 Mar. 2025.
“Grammarly Blog.” Grammarly.com, Grammarly, www.grammarly.com/blog/. Accessed 18 Mar. 2025.
“How to Choose the Perfect Colors for Your Business Card.” Vistaprint.com, VistaPrint, www.vistaprint.com/hub/business-card-colors. Accessed 18 Mar. 2025.
Laws, Jasmine. “Map Shows Most Spoken Languages in Each State Besides English and Spanish.” Newsweek.com, Newsweek, 3 Dec. 2024, www.newsweek.com/map-shows-most-spoken-languages-each-state-besides-english-spanish-1993046.
“NABIP: Who We Are.” NABIP.org, NABIP, nabip.org/who-we-are. Accessed 18 Mar. 2025.
“National Association of Insurance and Financial Advisors.” NAIFA.org, National Association of Insurance and Financial Advisors, belong.naifa.org/. Accessed 18 Mar. 2025.
Fogarty, Mignon. “Grammar Girl Podcast.” QuickAndDirtyTips.com, Quick and Dirty Tips, www.quickanddirtytips.com/grammar-girl/. Accessed 18 Mar. 2025.
Venditti, Bruno. “The Most Spoken Language in Every U.S. State (Besides English and Spanish).” VisualCapitalist.com, Visual Capitalist, 31 Oct. 2023, www.visualcapitalist.com/cp/the-most-spoken-language-in-every-u-s-state-besides-english-and-spanish/.
Geri. “The Psychology of Colors in Business Cards.” Clcme.eu, Click Me Smart Card, 19 Feb. 2025, clcme.eu/the-psychology-of-colors-in-business-cards/.
Lockwood, Amy. “Why Business Cards Are Still Relevant in 2025.” EmailSignatureRescue.com, Email Signature Rescue, 18 Dec. 2024, www.emailsignaturerescue.com/blog/why-business-cards-are-still-relevant-in-2025.

Follow Us on Social!
Ritter on Facebook: https://www.facebook.com/RitterIM, Instagram: https://www.instagram.com/ritter.insurance.marketing/, LinkedIn: https://www.linkedin.com/company/ritter-insurance-marketing, TikTok: https://www.tiktok.com/@ritterim, X: https://x.com/RitterIM, and YouTube: https://www.youtube.com/user/RitterInsurance
Sarah on LinkedIn: https://www.linkedin.com/in/sjrueppel/, Instagram: https://www.instagram.com/thesarahjrueppel/, and Threads: https://www.threads.net/@thesarahjrueppel
Tina on LinkedIn: https://www.linkedin.com/in/tina-lamoreux-6384b7199/

Contact the Agent Survival Guide Podcast! Email us at ASGPodcast@Ritterim.com or call 1-717-562-7211 and leave a voicemail.

Not affiliated with or endorsed by Medicare or any government agency.

Satansplain
Satansplain #083 - Listener Mail (ritual, The Satanic Witch, George Carlin, Intellectual Black Holes)

Feb 3, 2025 · 42:01


Satansplain responds to mail from the listeners, including such topics as: Satanic ritual, The Satanic Witch, George Carlin, intellectual black holes, the fight against Satanic misinformation, and why I do what I do. https://satansplain.locals.com/support

00:00 - Intro
01:24 - Using paper in ritual
04:00 - "Top Fan Badge"?
05:21 - "What would LaVey think of the world today?"
10:12 - George Carlin and Anton LaVey
15:15 - Satanecdote
21:45 - Dumbing it Down?
31:54 - Praise for StackExchange
34:12 - Battles Worth Fighting

Bitcoin Takeover Podcast
S15 E50: Murch on Popular Bitcoin Myths

Aug 15, 2024 · 143:33


Murch is a Bitcoin Core contributor, best known for his engineering work at Chaincode Labs & his prolific answers on Bitcoin's Stack Exchange page. In this episode, he breaks myths about ordinals, OP_CAT, Drivechains, 0conf, ossification & more!

Quant Trading Live Report
Understanding the Perfect R-Squared: A Quant Interview Deep Dive

Jun 12, 2024 · 8:14 (transcription available)


Join Brian from Quantlabsnet.com as he delves into a thought-provoking quant interview question sourced from StackExchange. In this episode, recorded on June 12th, Brian breaks down the concept of R-squared (R2) and its significance in statistical models, particularly in the context of investing. Brian explains the definition and calculation of R-squared, emphasizing how a perfect R2 value of 1 indicates that all movements of a security are completely explained by an independent variable. He discusses the implications of a high R2 value and the potential pitfalls, such as spurious regression. The episode explores various responses to the interview question, including the irony of needing an expected value when you already know the outcome. Brian also covers practical considerations like trading fees and taxes that can affect real-world applications of these models. Whether you're preparing for a quant interview or just curious about advanced statistical measures in finance, this episode offers valuable insights. For more detailed discussions and resources, visit quantlabsnet.com.
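The R-squared calculation discussed in the episode can be sketched in a few lines of Python. This is an illustrative toy (the variable names and the lockstep example are my own, not code from the episode): fitting a line by ordinary least squares and measuring how much of the variance it explains, with a perfectly correlated series yielding R² = 1.

```python
# R-squared: the fraction of variance in y explained by a linear model on x.
def r_squared(x, y):
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Ordinary least squares slope and intercept.
    slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
            sum((xi - mean_x) ** 2 for xi in x)
    intercept = mean_y - slope * mean_x
    predictions = [intercept + slope * xi for xi in x]
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, predictions))  # residual sum of squares
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)                    # total sum of squares
    return 1 - ss_res / ss_tot

# A security that moves in exact lockstep with its benchmark: the independent
# variable explains all of the variation, so R-squared is exactly 1.
benchmark = [1.0, 2.0, 3.0, 4.0, 5.0]
security = [2.0 + 0.5 * b for b in benchmark]
print(r_squared(benchmark, security))  # 1.0
```

In the perfect-R² case the "prediction" is redundant, which is the irony the episode highlights: if every movement is already explained, you did not need an expected value in the first place.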

Quant Trading Live Report
The Ultimate Guide to Simple Momentum Strategies and Market Analysis

Jun 4, 2024 · 16:56 (transcription available)


Welcome to a comprehensive episode where Brian from QuantLabsNet.com delves into a series of insightful articles focusing on strategy, development, and allocation in the financial markets. This episode covers key topics such as macroeconomic analysis, technical indicators, and the efficacy of simple momentum strategies.

LEARN | Quantlabs (quantlabsnet.com)
Dive into Effective Trading Algorithms and Simple Momentum Strategies (quantlabsnet.com)

In the first segment, we explore an article from PriceActionLab.com that highlights the use of simple technical indicators for tracking market momentum. Brian discusses how a 12-month moving average model has shown promising results, even outperforming more complex strategies in certain scenarios.

The episode continues with a critical examination of why macroeconomic market analysts often dismiss other methods, particularly systematic trading. Brian shares his own experiences and insights, emphasizing the importance of both fundamental and technical analysis for market timing and selection.

Next, we shift focus to a DIY trend-following asset allocation strategy from AlphaArchitect.com. Brian outlines the current exposure recommendations for various asset classes, including domestic and international equities, REITs, commodities, and bonds. He provides guidance on how to balance these allocations based on different risk profiles.

The episode wraps up with a deep dive into mathematical modeling and spread calculations, featuring discussions from Quant.StackExchange.com. Brian addresses complex questions on modeling bid and ask processes and calculating spreads for trading strategies, offering practical advice for managing market noise and volatility.

Tune in for a wealth of knowledge on market strategies, backed by real-world examples and expert analysis. Don't miss out on this informative episode that promises to enhance your understanding of market dynamics and trading methodologies.
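The 12-month moving-average rule described above can be sketched as a simple signal generator. This is a toy illustration under assumed synthetic prices, not the PriceActionLab model itself: hold the asset when its monthly close is above its trailing 12-month average, otherwise move to cash.

```python
# Toy 12-month moving-average momentum rule: be invested when the latest
# monthly close is above its trailing 12-month average, otherwise hold cash.
def moving_average_signals(monthly_closes, window=12):
    signals = []
    for i in range(window, len(monthly_closes)):
        trailing_avg = sum(monthly_closes[i - window:i]) / window
        signals.append("invested" if monthly_closes[i] > trailing_avg else "cash")
    return signals

# A steadily rising series stays above its trailing average, so the rule keeps
# you invested; a steadily falling series flips every signal to cash.
rising = [float(p) for p in range(100, 125)]       # 25 months of rising closes
falling = [float(p) for p in range(125, 100, -1)]  # 25 months of falling closes
print(moving_average_signals(rising)[:3])   # ['invested', 'invested', 'invested']
print(moving_average_signals(falling)[:3])  # ['cash', 'cash', 'cash']
```

Real applications would also account for the trading fees, taxes, and whipsaw around the average that the episode cautions about.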

Investigando la investigación
312. Fine-Tune Your Research Experience with Academia Stack Exchange

Jun 4, 2024 · 13:23


Today we talk about how useful platforms like Academia Stack Exchange are for researchers. They let us ask questions about research topics that interest or concern us, and also answer the questions posted by other colleagues. By participating, we earn points and reputation within the community.

Answering other people's questions, even when we are not experts in the subject, has several benefits. First, it develops our analytical skills. In addition, reading other users' answers can spark new ideas and perspectives we may not have considered. Writing our own answers also improves our ability to verbalize and communicate effectively.

Another great benefit of participating in these forums is the opportunity to network. We can connect with other researchers who share our interests, and if a question sparks an interesting discussion, we can propose continuing the conversation in another format, such as a call or a collaboration.

Even if we do not feel capable of giving the best answer, the simple act of trying will give us valuable feedback. That is why I encourage you to participate at least once a week, whether by posting your own questions or answering those of others. This practice can have a very positive, multiplying effect on your career as a researcher.

Links and resources mentioned:
Academia Stack Exchange: https://academia.stackexchange.com/
Other platforms (less recommended for academic topics): ResearchGate, Reddit, Twitter, LinkedIn

Finally, I invite you to reflect on how you would apply these strategies and to join our community of researchers to discuss this topic further through the following link: https://chat.whatsapp.com/BIfSH9QFEiK9hiS83fw2am or https://horacio-ps.com/comunidad

Send in a voice message: https://podcasters.spotify.com/pod/show/horacio-ps/message

Hackaday Podcast
Ep 272: Desktop EDM, Silence of the Leaves, and the Tyranny of the Rocket Equation

May 24, 2024 · 76:18


With Elliot off on vacation, Tom and Dan made a valiant effort to avoid the dreaded "clip show" and provide you with the tastiest hacker treats of the week. Did they succeed? That's not for us to say, but if you're interested in things like non-emulated N64 games and unnecessarily cool filament sensors, this just might be one to check out. We also came across a noise suppressor for a leaf blower, giant antennae dangling from government helicopters, and a desktop-friendly wire EDM setup that just might change the face of machining. We waxed on about the difference between AI-generated code and just pulling routines from StackExchange, came to the conclusion that single-stage-to-orbit is basically just science fiction, and took a look at the latest eclipse from 80,000 feet, albeit a month after the fact.

Postgres FM
Why isn't Postgres using my index?

Feb 23, 2024 · 35:25


Nikolay and Michael discuss a common question — why Postgres isn't using an index, and what you can do about it!

Here are some links to things they mentioned:
Why isn't Postgres using my index? (blog post by Michael) https://www.pgmustard.com/blog/why-isnt-postgres-using-my-index
Why isn't Postgres using my functional index? (Stack Exchange question from Brent Ozar) https://dba.stackexchange.com/questions/336019/why-isnt-postgres-using-my-functional-index
enable_seqscan (and similar parameters) https://www.postgresql.org/docs/current/runtime-config-query.html
Crunchy Bridge changed random_page_cost to 1.1 https://docs.crunchybridge.com/changelog#postgres_random_page_cost_1_1
Make indexes invisible (trick from Haki Benita) https://hakibenita.com/sql-tricks-application-dba#make-indexes-invisible
ANALYZE https://www.postgresql.org/docs/current/sql-analyze.html
Statistics used by the planner https://www.postgresql.org/docs/current/planner-stats.html
Our episode on query hints https://postgres.fm/episodes/query-hints
transaction_timeout (commit for Postgres 17) https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=51efe38cb92f4b15b68811bcce9ab878fbc71ea5
What's new in the Postgres 16 query planner / optimizer (blog post by David Rowley) https://www.citusdata.com/blog/2024/02/08/whats-new-in-postgres-16-query-planner-optimizer/

What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!

Postgres FM is brought to you by:
Nikolay Samokhvalov, founder of Postgres.ai
Michael Christofides, founder of pgMustard
With special thanks to Jessie Draws for the amazing artwork

Screaming in the Cloud
The World of Salesforce Cloud Development with Evelyn Grizzle

Jul 27, 2023 · 31:41


Evelyn Grizzle, Senior Salesforce Developer, joins Corey on Screaming in the Cloud to discuss the often-misunderstood and always exciting world of Salesforce development. Evelyn explains why Salesforce development is still seen as separate from traditional cloud development, and describes the work of breaking down barriers and silos between Salesforce developers and engineering departments. Corey and Evelyn discuss how a non-traditional background can benefit people who want to break into tech careers, and Evelyn reveals the best parts of joining the Salesforce community.

About Evelyn
Evelyn is a Salesforce Certified Developer and Application Architect and 2023 Salesforce MVP Nominee. They enjoy full stack Salesforce development, most recently having built a series of Lightning Web Components that utilize a REST callout to a governmental database to verify the licensure status of a cannabis dispensary. An aspiring Certified Technical Architect candidate, Evelyn prides themself on deploying secure and scalable architecture. With over ten years of customer service experience prior to becoming a Salesforce Developer, Evelyn is adept at communicating with both technical and non-technical internal and external stakeholders. When they are not writing code, Evelyn enjoys coaching for RADWomenCode, mentoring through the Trailblazer Mentorship Program, and rollerskating.

Links Referenced:
Another Salesforce Blog: https://anothersalesforceblog.com
RAD Women Code: https://radwomen.org/
Personal Website: https://evelyn.fyi
LinkedIn: https://www.linkedin.com/in/evelyngrizzle/

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize.
This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and this is Screaming in the Cloud. But what do we mean by cloud? Well, people have the snarky answer of, it's always someone else's computer. I tend to view it through a lens of being someone else's call center, which is neither here nor there.

But it all seems to come back to Infrastructure as a Service, which is maddeningly incomplete. Today, we're going in a slightly different direction in the world of cloud. My guest today is Evelyn Grizzle, who, among many other things, is also the author of anothersalesforceblog.com. I want to be clear, that is not me being dismissive. That is the actual name of the blog. Evelyn, thank you for joining me.

Evelyn: Hi, Corey. Thank you for having me.

Corey: So, I want to talk a little bit about one of the great unacknowledged secrets of the industry, which is that every company out there, sooner or later, uses Salesforce. They talk about their cloud infrastructure, but Salesforce is nowhere to be seen in it. But, for God's sake, at The Duckbill Group, we are a Salesforce customer. Everyone uses Salesforce. How do you think that wound up not being included in the narrative of cloud in quite the same way as AWS or, heaven forbid, Azure?

Evelyn: So, Salesforce is kind of at the proverbial kid's table in terms of the cloud infrastructure at most companies. And this is relatively because the end-users are, you know, sales reps. We've got people in call centers who are working on Salesforce, taking in information, taking in leads, opportunities, creating accounts for folks. And it's kind of seen as a lesser service because the primary users of Salesforce are not necessarily the techiest people on the planet. So, I am really passionate about, like, making sure that end-users are respected.

Salesforce actually just added a new certification, the Sales Representative Certification that you can get.
That kind of gives you insight to what it's like to use Salesforce as an end-user. And given that Salesforce is for sales, a lot of times Salesforce is kind of grouped under the Financial Services portion of a company as opposed to, like, engineering. So again, kind of at the proverbial kid's table; we're over in finance, and the engineering team who's working on the website, they have their engineering stuff.

And a lot of people don't really know what Salesforce is. So, to give a rundown, basically, Salesforce development is, I lovingly referred to it as bastard Java full-stack development. Apex, the proprietary language, is based in Java, so you have your server-side Java interface with the Salesforce relational database. There's the Salesforce Object Query Language and Salesforce Object Search Language that you can use to interact with the database. And then you build out front-end components using HTML and JavaScript, which a lot of people don't know.

So, it's not only an issue of the end-users are call center reps, their analysts, they're working on stuff that isn't necessarily considered techie, but there's also kind of an institutional breakdown of, like, what is Salesforce? This person is just dragging and dropping when that isn't true. It's actually, you know, we're writing code, we're doing stuff, we're basically writing full-stack Java. So, I like to call that out.

Corey: I mean, your undergraduate degree is in network engineering, let's be very clear. This is—I'm not speaking to you as someone who's non-technical trying to justify what they do as being technical. You have come from a very deep place that no one would argue is, “Well, that's not real computering.” Oh, I assure you, networking is very much real computering, and so is Salesforce. I have zero patience for this gatekeeping nonsense we see in so many areas of tech, but I found this out firsthand when we started trying to get set up with Salesforce here.
It took wailing and gnashing of teeth and contractor upon contractor. Some agencies did not do super well, some people had to come in and rescue the project. And now it mostly—I think—works.

Evelyn: Yeah, and that's what we go for. And actually, so my degree is in network engineering, but an interesting story about me. I actually went to school for chemical engineering. I hated it. It was the worst. And I dropped out of school and did, like, data analytics for a while. Worked my way up as a call center rep at a telephone company and made a play into database administration. And because I was working at the phone company, my degree is in network engineering because I was like, “I want to work at the phone company forever.” Of course that did not pan out. I got a job doing Salesforce development and really enjoy it. There's always something to learn. I taught myself Salesforce while I was working at IBM, with the Blue Wolf department—they're a big Salesforce consulting shop at IBM—and through their guidance and tutelage, I guess, I did a lot of training and worked up on Salesforce. And it's been a lot of fun.

Corey: I do feel that I need to raise my hand here and say that I am in the group you described earlier of not really understanding what Salesforce is. My first real exposure to Salesforce in anything approaching a modern era was when I was at a small consulting company that has since been bought by IBM, which rather than opine on that, what I found interesting was the Salesforce use case where we wound up using that internally to track where all the consultants were deployed, how they wound up doing on their most recent refresher skills assessment, et cetera, so that when we had something strange, like a customer coming in with, “I need someone who knows the AS/400 really well,” we could query that database internally and say, “Ah. We happen to have someone coming off of a project who does in fact, know how that system works.
Let's throw them into the mix.” And that was incredibly powerful, but I never thought of it as being a problem that a tool that was aimed primarily at sales would be effective at solving. I was very clearly wrong.

Evelyn: Yeah. So, the thing about Salesforce is there's a bunch of different clouds that you can access. So there's, like, Health Cloud, Service Cloud, Sales Cloud is the most common, you know, Salesforce, Sales Cloud, obviously. But Service Cloud is going to be a service-based Salesforce organization that allows you to track folks, your HR components, you're going to track your people. There's also Field Service Lightning.

And an interesting use case I had for Field Service Lightning, which is a application that's built on top of Salesforce that allows field technicians to access Salesforce, one of the coolest projects I've built in my career so far is, the use case is, there's an HVAC company that wants to be able to charge customers when they go out into the field. And they want to have their technician pull out an iPad, swipe the credit card, and it charges the customer for however much duct tape they used, however much piping, whatever, duct work they do. Like I said, I'm a software engineer, I'm not a HVAC person, but—

Corey: It's the AWS building equivalent for HVAC, as best I can tell. It's like all right, “By the metric foot-pound—” “Isn't that a torque measurement?” “Not anymore.” Yeah, that's how we're going to bill you for time and materials. It'll be great.

Evelyn: Exactly. So, this project I built out, it connects with Square, which is awesome. And Field Service Lightning allows this technician to see where they're supposed to go on the map, it pulls up all the information, a trigger in Salesforce, an automation, pulls all the information into Field Service Lightning, and then you run the card, it webhooks into Square, you send the information back. And it was a really fun project to work on.
So, that was actually a use case I had not thought of for Salesforce is, you know, being able to do something like this in the field and making a technician's job that much easier.

Corey: That's really when I started to feel, as this Salesforce deployment we were doing here started rolling out, it wasn't just—my opinion on it was like, “Wait, isn't this basically just that Excel sheet somewhere that we can have?” And it starts off that way, sure, but then you have people—for example, we've made extensive use of aspects of this over on the media side of our business, where we have different people that we've reached out to who then matriculate on to other companies and become sponsors in that side of the world. And how do we track this? How do we wind up figuring out what's currently in flight that doesn't live in someone's head, or God forbid, email inbox? How do we start reasoning about these things in a more holistic way?

We went in a slightly different direction before rolling it out to handle all of the production pieces and the various things we have in flight, but I could have easily seen a path whereas we instead went down that rabbit hole and used it as more or less the ERP, for lack of a better term, for running a services business.

Evelyn: Yeah. And that is one thing you can use Salesforce as an ERP. FinancialForce, now Certinia, exists, so it is possible to use Salesforce as an ERP, but there's so much more to it than that. And Salesforce, at its heart, is a relational database with a fancy user interface. And when I say, “I'm a Salesforce developer,” they're like, “Oh, you work at Salesforce?” And I'm like, “No, not quite. I customize Salesforce for companies that purchase Salesforce as a Salesforce customer.”

And the extensibility of the platform is really awesome. And you know, speaking of the external clients that want to use Salesforce, there's, like, Community Cloud where you can come in and have guest users.
You can have your—if you are, say at a phone company, you can have a troubleshooting help center. You can have chatbots in Salesforce. I have a lot of friends who are working on AI chatbots with the Einstein AI within Salesforce, which is actually really cool. So, there is a lot of functionality that is extensible within Salesforce beyond just a basic Excel spreadsheet. And it's a lot of fun.

Corey: If I pull up your website, anothersalesforceblog.com, one of the first things that you mentioned on the About the Author page just below the fold, is that you are an eight-time Salesforce Certified Developer and application architect. Like, wow, “Eight different certifications? What is this, AWS, on some level?”

I think that there's not a broad level of awareness in the ecosystem, just how vast the Salesforce-specific ecosystem really is. It seems like there's an entire, I want to reprise the term that someone—I can't recall who—used to describe Dark Matter developers, the people that you don't generally see in most of the common developer watering holes like Stack Overflow, or historically shitposting on Twitter, but they're out there. They rock in, they do their jobs. Why is it that we don't see more Salesforce representation in, I guess, the usual tech watering holes?

Evelyn: So, we do have a Stack Overflow, a Stack Exchange as well. They are separate entities that are within the greater Stack websites. And I assure you, there's lots of Salesforce shitposting on Twitter. I used to be very good at it, but no longer on Twitter due to personal reasons. We'll leave it at that.

But yeah, Dreamforce is like a massive conference that happens in San Francisco every year. We are gearing up for that right now. And there's not a lot of visibility into Salesforce outside of that it feels like. It's kind of an insulated community.
And that goes back to the Salesforce being at the kids' table in the engineering departments.

And one of the things that I've been working on in my current role is really breaking down the barriers and the silos between the engineering department who's working on JavaScript, they're working on Node, they're working on HTML, they're, you know, building websites with React or whatever, and I'm coming in and saying, like, hey, we do the same thing. I can build a Heroku app in React, if I want to, I can do PHP, I can do this. And that's one of the cool things about Salesforce is some days I get to write in, like, five or six different languages if I want to. So, that is something that, there's not a lot of understanding. Because again, relational database with a fancy user interface.

To the outside, it may seem like we're dragging and dropping stuff. Which yes, there is some stuff. I love Flows, which are… they're drag-and-drop automations that you can do within Salesforce that are actually really powerful. In the most recent update, you can actually do an HTTP call-out in a Flow, which is something that's, like, unheard of for a Salesforce admin with no coding background can come in, they can call an Apex class, they can do an HTTP call-out to an external resource and say, like, “Hey, I want to grab this information, pull it back into Salesforce, and get running off the ground with, like, zero development resources, if there are none available.”

Corey: I want to call out just for people who think this is more niche than it really is. I live in San Francisco. And I remember back in pre-Covid times, back when Dreamforce was in town. I started seeing a bunch of, you know, nerdy-looking people with badges. Oh, it's a tech conference, what conference is it? It's something called Dreamforce for Salesforce.

Oh, is that like the sad small equivalent of re:Invent in Las Vegas? And it's no, no, it's actually about three times the size.
170,000 people descend on San Francisco to attend this conference. It is massive. And it was a real eye-opener for me just to understand that. I mean, I have a background in sales before I got into tech and I did not realize that this entire ecosystem existed. It really does feel like it is more or less invisible and made me wonder what the hell else I'm missing, as I am too myopically focused on one particular giant cloud company to the point where it has now become a major facet of my personality.

Evelyn: And that's the thing is there's all kinds of community events as well. So, I'm actually speaking at Forcelandia which, it's a Salesforce developer-focused event that is in Portland—Forcelandia, obviously—and I'm going to be speaking on a project that I built for my current company that is, like, REST APIs, we've got some encryption, we've got a front-end widget that you drop into a Salesforce object. Which, a Salesforce object is a table within the relational database, and being able to use polymorphic object relationships within Salesforce and really extending the functionality of Salesforce. So, if you're in Portland, I will be at Forcelandia on July 13th and I'm really excited about it.

But it's this really cool ecosystem that, you know, there's events all over the world, every month, happening. And we've got Mile High Dreamin' coming up in August, which I'll be at as well, speaking there on how to break into the ecosystem from a non-tech role, which will be exciting. But yeah, it's a really vibrant community like, and it's a really close-knit community as well. Everyone is so super helpful. If I have a question on Stack Exchange, or, you know, back in my Twittering days, if I'd have something on Twitter, I could just post out and blast out, and the whole Salesforce community would come in with answers, which is awesome.
I feel like the Stack Exchange is not the friendliest place on the planet, so to be able to have people who, like, I recognize that username and this person is going to come and help me out. And that's really cool. I like that about the Salesforce community.
Corey: Yeah, a ding for a second on the whole Stack Exchange thing. That the Stack Overflow survey was fascinating, and last year, they showed that 92% of their respondents were male. So, this year, they fixed that problem and did not ask the question. So, I just refer to it nowadays as Stack Broverflow because that's exactly how it seems.
Evelyn: [laugh].
Corey: And that is a giant problem. I just didn't want that to pass uncommented-on in public. Thank you for giving me the opportunity to basically—
Evelyn: Fair enough.
Corey: —mouth off about that crappy misbehavior.
Evelyn: Oh, yeah. No. And that's one of the things that I really like about the Salesforce community is there's actually, like, a huge movement towards gender equity and parity. So, one of the organizations that I'm involved with is RAD Women Code, which is a nonprofit that Angela Mahoney and a couple of other women started that it seeks to upskill women and other marginalized genders from Salesforce admins, which are your declarative users within Salesforce that set up the security settings, they set up the database relationships, they make metadata changes within Salesforce, and take that relational database knowledge and then upskill them into Salesforce developers. And right now, there is a two-part course that you can sign up for. If you have I believe it's a year or two of Salesforce admin experience and you are a woman or other marginalized gender, you can sign up and take part one, which is a very intro to computer programming, you go over the basics of object-oriented programming, a little bit of Java, a little bit of SOQL, which is the Salesforce Object Query Language.
And then you build projects, which is really awesome, which is, like, the most effective way to learn is actually building stuff. And then the second part of the course is, like, a more advanced, like, let's get into our batch classes, which is like an automation that you can run every night. Let's do advanced object-oriented programming topics like abstraction and polymorphism. And being able to teach that is really fun. We're also planning on adding a third course, which is going to be the front-end development in Salesforce, which is your HTML, your JavaScript. Salesforce uses vanilla JavaScript, which I love, personally. I know I'm alone in that. I know that's the big meme on Facebook in the programming groups is ‘JavaScript bad,' but I have fun with it. There's a lot you can do with just native JavaScript in Salesforce. Like, you can grab the geolocation of a device and print it onto a Salesforce object record using just vanilla JavaScript. And it's been really helpful. I've done that a few times on various projects. But yeah, we're planning on adding a third course. We are currently getting ready to launch the pilot program on that for RAD Women Code. So, if you are listening to this, and you are a Salesforce admin who is a marginalized gender, definitely hit me up on LinkedIn and I will send you some information because it's a really good program and I love being able to help out with it.
Corey: We'll definitely include links to that in the [show notes 00:18:59]. I mean, this does tie into the next question I have, which is, how do you go about giving a cohesive talk or even talking at all about Salesforce, given the tremendous variety in terms of technical skills people bring to bear with it, the backgrounds that they have going into it? It feels, on some level, like, it's only a half-step removed from, “So, you're into computers?
Here's a conference for that.” Which I understand, let's be clear here, that I am speaking from the position of the AWS ecosystem, which is throwing stones in a very fragile glass house.
Evelyn: Yeah, so again, I said this already. When I say I'm a Salesforce developer, people say, “Oh, you work at Salesforce. That is so cool.” And I have to say, “No, no. No working at Salesforce. I work on Salesforce in the proprietary system.” But there's always stuff to be learned. There's obviously, like, two releases a year where they send updates to the Salesforce software that companies are running on and working on computers is kind of how I sum it up, but yeah, I don't know [laugh].
Corey: No, I think that's a fair place to come at from. It's, I think that we all have a bit of a bias in that we tend to assume that other people, in the absence of data to the contrary, have similar backgrounds and experiences to our own. And that means in many cases, we paper over things that are not necessarily true. We find ourselves biasing for people whose paths resemble our own, which is not inherently a bad thing until it becomes exclusionary. But it does tend to occlude the fact that there are many paths to this broader industry.
Evelyn: Yeah. So, there is a term in the Salesforce ecosystem, we like to call people accidental admins, where they learn Salesforce on a job and like it so much that they become a Salesforce admin. And a lot of times these folks will then become developers and then architects, even, which is kind of how I got into it as well. I started at a phone company as a Salesforce end-user, worked my way up as a database admin, database coordinator doing e911 databases, and then transitioned into software engineering from there.
So, there's a lot of folks who find themselves within the Salesforce ecosystem, and yeah, there are people with, like, bonafide top-ten computer science school degrees, and you know, we've got a fair bit of that, but one thing that I really like about the Salesforce ecosystem is because everyone's so friendly and helpful and because there's so many resources to upskill folks, it's really easy to get involved in the ecosystem. Like Trailhead, the training platform for Salesforce is entirely free. You can sign up for an account, you can learn anything on Salesforce from end-user stuff to Salesforce architecture and anything in between. So, that's how most people study for their certifications. And I love Trailhead. It's a very fun little modules. It gamifies learning and you get little, I call them Girl Scout badges because they resemble, you know, you have your Girl Scout vest and your Girl Scout sash, and you get the little badges. So, when you complete a project, you get a badge—or if you work on a big project, a super badge—that you can then put on your resume and say, “Hey, I built this 12-hour project in Salesforce Trailhead.” And some of them are required for certifications. So, you can say, “I did this. I got this certification, and I can actually showcase my skills and what I've been working on.” So, it really makes a good entrance to the ecosystem. Because there's a lot of people who want to break into tech that don't necessarily have that background that are able to do so and really, really shine. And I tell people, like, let's see, it's 2023. Eight years ago, I was a barista. I was doing undergraduate research and working in a coffee shop. And that's really helped me in my career. And a lot of people don't think about this, but the soft skills that you learn in, like, a food service job or a retail job are really helpful for communicating with those internal and external stakeholders, technical and non-technical stakeholders.
And if you've ever been yelled at by a Karen on a Sunday morning, in a university town on graduation weekend, you can handle any project manager. So, that's one thing that, like, because there's so many resources in the ecosystem, there's so many people with so many varied backgrounds in the ecosystem, it's a really welcoming place. And there's not, like… I don't know, there's not a lot of, like, degree shaming or school shaming or background shaming that I feel happens in some other tech spaces. You know, I see your face you're making there. I know you know what I'm talking about. But—[laugh].
Corey: I have an eighth-grade education on paper. My 20s were very interesting. Now, it's a fun story, but it was very tricky to get past a lot of that bias early on in my career. You're not wrong.
Evelyn: Absolutely. And like I said, eight years ago, I was a barista. I went to school for chemical engineering. I have an engineering background, I have most of a chemical engineering degree. I just hated it so much. But getting into Salesforce honestly changed my life because I worked my way up from a call center as an end-user on Salesforce. Being able to say I have worked as a consultant. I have worked as a staff software engineer, I have worked at an ISV partner, which if you don't know what that is, Salesforce has an app store, kind of like the Google Play Store or the Apple App Store, but purely apps on Salesforce, and it's called the Salesforce App Exchange. So, if you have Salesforce, you can extend your functionality by adding an app from the App Exchange to if you want to use Salesforce as an ERP, for example, you can add the Certinia app from the App Exchange.
And I've worked on AppExchange apps before, and now I'm like, making a big kid salary and, like, it's really, really kind of cool because ten years ago, I didn't think my life was going to be like this, and I owe it to—I'm going to give my old boss Scott Bell a shout out on this because he hired me, and I'm happy about it, so thank you, Scott for taking a chance and letting me learn Salesforce. Because now I'm on Screaming in the Cloud, which is really cool, so—talking about Salesforce, which is dorky, but it's really fun.
Corey: If it works, what's wrong with it?
Evelyn: Exactly.
Corey: There's a lot to be said for helping people find a path forward. One of the things that I've always been taken aback by has been just how much small gestures can mean to people. I mean, I've had people thanked me for things I've done for them in their career that I don't even remember because it was, “You introduced me to someone once,” or, “You sat down with me at a conference and talked for 20 minutes about something that then changed the course of my career.” And honestly, I feel like a jerk when I don't remember some of these things, but it's a yeah, you asked me my opinion, I'm thrilled to give it to you, but the choices beyond that are yours. It still sticks out, though, that the things I do can have that level of impact for people.
Evelyn: Yeah, absolutely. And that's one of the things about the Salesforce community is there are so many opportunities to make those potentially life-changing moments for people. You can give back by being a Trailblazer Mentor, you can sign up for Trailblazer Mentorship from any level of your career, from being a basic fresh, green admin to signing up for architecture lessons. And the highest level of certification in Salesforce is the Certified Technical Architect.
There's, like, 300 of them in the world and there are nonprofits that are entirely dedicated to helping marginalized genders and women and black and indigenous people of color to make these milestones and go for the Certified Technical Architect certification. And there's lots of opportunities to give back and create those moments for people. And I spoke at Forcelandia last year, and one of the things that I did—it was the Women in Tech breakfast, and we went over my LinkedIn—which is apparently very good, so if you don't know what to do on LinkedIn, you can look at mine, it's fine—we went through LinkedIn and your search engine optimization in LinkedIn and how you can do this, and you know, how to get recruiters to look at your LinkedIn profile. And I went through my salary history of, like, this is how much I was making ten years ago, this is how much I'm making now, and this is how much I made at every job on the way. And we went through and did that. And I had, like, ten women come up to me afterwards and say, “I have never heard someone say outright their salary numbers before. And I don't know what to ask for when I'm in negotiations.”
Corey: It's such a massive imbalance because all the companies know what other people are making because they get a holistic view. They know what they're paying across the board. I think a lot of the pay transparency movement has been phenomenal. I've been in situations before myself, where my boss walks up to me out of nowhere, and gives me an unsolicited $10,000 raise. It's, “Wow, thanks.” Followed immediately by, “Wait a minute.”
Evelyn: Mm-hm.
Corey: People generally don't do that out of the goodness of their hearts. How underpaid am I? And every time it was, yeah, here's the $10,000 raise so you don't go get 30 somewhere else.
Evelyn: Yeah.
And that's one of the things that, like, going into job negotiations, women and people of marginalized genders will apply for jobs that they're a hundred percent qualified for, which means that they're not growing in their positions. So, if you're not kind of reaching when you're applying for positions, you're not going to get the salary you need, you're not going to get that career growth you need, whereas, not to play this card, but like, white men will go in and be, like, “I've got 60% of the qualifications. I'm going to ask for this much money.” And then they get it. And it's like, why don't I do that? It's, you know, societal whatever is pressuring me not to. And being able to talk transparently about that stuff is, like, so important. And these women just, like, went into salary negotiations a couple weeks later, and I had one of them message me and say, like, “Yeah, I asked for the number you said at this conference and I got it.” And I was like, “Yes! Congratulations.” Because that is life-changing, especially, like, because so many of us come from non-technical backgrounds in Salesforce, you don't know how much money you can make in tech until you get it, and it's absolutely life-changing.
Corey: Yeah, it's wild to me, but that's the way it works. I really want to thank you for taking the time to speak with me. If people want to learn more, where's the best place for them to find you?
Evelyn: So, I am reachable at anothersalesforceblog.com, and evelyn.fyi, E-V-E-L-Y-N dot F-Y-I, which actually just links back to another Salesforce blog, which is fine. But I'm really [laugh] reachable on LinkedIn and really active there, so if you need any Salesforce mentorship, I do that. And I love doing it because so many people have helped me in my career that it's really, like, anything I can do to give back.
And that's really kind of the attitude of the Salesforce ecosystem, so definitely feel free to reach out.
Corey: And we will, of course, put links to that in the [show notes 00:30:27]. Thank you so much for taking the time to, I guess, explain how an entire swath of the ecosystem views the world.
Evelyn: Yeah, absolutely. Thank you for having me, Corey.
Corey: Evelyn Grizzle, Senior Salesforce Developer. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry, insulting comment that I will one day aggregate somewhere, undoubtedly within Salesforce.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

The Nonlinear Library
LW - MetaAI: less is less for alignment. by Cleo Nardo

Jun 14, 2023 · 11:39


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MetaAI: less is less for alignment., published by Cleo Nardo on June 13, 2023 on LessWrong. Summary In May 2023, MetaAI submitted a paper to arXiv called LIMA: Less Is More for Alignment. It's a pretty bad paper and (in my opinion) straightforwardly misleading. Let's get into it. The Superficial Alignment Hypothesis The authors present an interesting hypothesis about LLMs: We define the Superficial Alignment Hypothesis: A model's knowledge and capabilities are learnt almost entirely during pretraining, while alignment teaches it which subdistribution of formats should be used when interacting with users. If this hypothesis is correct, and alignment is largely about learning style, then a corollary of the Superficial Alignment Hypothesis is that one could sufficiently tune a pretrained language model with a rather small set of examples. We hypothesize that alignment can be a simple process where the model learns the style or format for interacting with users, to expose the knowledge and capabilities that were already acquired during pretraining. (1) This hypothesis would have profound implications for AI x-risk It suggests that we could build a safe competent oracle by pretraining an LLM on the entire internet corpus, and then finetuning the LLM on a curated dataset of safe competent responses. It suggests that we could build an alignment researcher by pretraining an LLM on the entire internet corpus, and then finetuning the LLM on a curated dataset of alignment research. (2) Moreover, as Ulisse Mini writes in their review of the LIMA paper, Along with TinyStories and QLoRA I'm becoming increasingly convinced that data quality is all you need, definitely seems to be the case for finetuning, and may be the case for base-model training as well. Better scaling laws through higher-quality corpus?
Also for who haven't updated, it seems very likely that GPT-4 equivalents will be essentially free to self-host and tune within a year. Plan for this! (3) Finally, the hypothesis would've supported many of the intuitions in the Simulators sequence by Janus, and I share these intuitions. So I was pretty excited to read the paper! Unfortunately, the LIMA results were unimpressive upon inspection. MetaAI's experiment The authors finetune MetaAI's 65B parameter LLaMa language model on 1000 curated prompts and responses (mostly from StackExchange, wikiHow, and Reddit), and then compare it to five other LLMs (Alpaca 65B, DaVinci003, Bard, Claude, GPT4). Method: To compare LIMA to other models, we generate a single response for each test prompt. We then ask crowd workers to compare LIMA outputs to each of the baselines and label which one they prefer. We repeat this experiment, replacing human crowd workers with GPT-4, finding similar agreement levels. Results: In a controlled human study, responses from LIMA are either equivalent or strictly preferred to GPT-4 in 43% of cases; this statistic is as high as 58% when compared to Bard and 65% versus DaVinci003, which was trained with human feedback. Conclusion: The fact that simple fine-tuning over so few examples is enough to compete with the state of the art strongly supports the Superficial Alignment Hypothesis, as it demonstrates the power of pretraining and its relative importance over large-scale instruction tuning and reinforcement learning approaches. Problems with their experiment (1) Human evaluators To compare two chatbots A and B, you could ask humans whether they prefer A's response to B's response across 300 test prompts. But this is a pretty bad proxy, because here's what users actually care about: What's the chatbots' accuracy on benchmark tests, e.g. BigBench, MMLU? Can the chatbot pass a law exam, or a medical exam? Can the chatbot write Python code that actually matches the specification?
Can the chatbot perform worthwhi...
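The win-rate statistic the paper leans on ("equivalent or strictly preferred in 43% of cases") is just a tally over pairwise verdicts. A minimal sketch with invented verdict labels (hypothetical data; the actual study used crowd workers and GPT-4 as judges over a few hundred prompts):

```python
# Sketch of the pairwise-preference tally behind LIMA's headline numbers.
# The verdicts below are invented for illustration, not from the paper.
from collections import Counter

def preference_rate(verdicts):
    """Fraction of prompts where model A's response was judged
    equivalent to ('tie') or strictly preferred over ('a') model B's."""
    counts = Counter(verdicts)
    return (counts["a"] + counts["tie"]) / len(verdicts)

# Toy example: 3 wins, 2 ties, 5 losses across 10 test prompts.
verdicts = ["a"] * 3 + ["tie"] * 2 + ["b"] * 5
print(preference_rate(verdicts))  # → 0.5
```

Note how a single number like this collapses away exactly the things the critique lists (benchmark accuracy, exam performance, code correctness), which is the author's objection to using it as the headline result.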

The Nonlinear Library
AF - LIMA: Less Is More for Alignment by Ulisse Mini

May 30, 2023 · 2:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LIMA: Less Is More for Alignment, published by Ulisse Mini on May 30, 2023 on The AI Alignment Forum. Abstract Large language models are trained in two stages: (1) unsupervised pretraining from raw text, to learn general-purpose representations, and (2) large scale instruction tuning and reinforcement learning, to better align to end tasks and user preferences. We measure the relative importance of these two stages by training LIMA, a 65B parameter LLaMa language model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modeling. LIMA demonstrates remarkably strong performance, learning to follow specific response formats from only a handful of examples in the training data, including complex queries that range from planning trip itineraries to speculating about alternate history. Moreover, the model tends to generalize well to unseen tasks that did not appear in the training data. In a controlled human study, responses from LIMA are either equivalent or strictly preferred to GPT-4 in 43% of cases; this statistic is as high as 58% when compared to Bard and 65% versus DaVinci003, which was trained with human feedback. Taken together, these results strongly suggest that almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality output. Implications Data Quality & Capabilities Along with TinyStories and QLoRA I'm becoming increasingly convinced that data quality is all you need, definitely seems to be the case for finetuning, and may be the case for base-model training as well. Better scaling laws through higher-quality corpus? 
Also for who haven't updated, it seems very likely that GPT-4 equivalents will be essentially free to self-host and tune within a year. Plan for this! Perplexity != Quality When fine-tuning LIMA, we observe that perplexity on held-out Stack Exchange data (2,000 examples) negatively correlates with the model's ability to produce quality responses. To quantify this manual observation, we evaluate model generations using ChatGPT, following the methodology described in Section 5. Figure 9 shows that as perplexity rises with more training steps – which is typically a negative sign that the model is overfitting – so does the quality of generations increase. Lacking an intrinsic evaluation method, we thus resort to manual checkpoint selection using a small 50-example validation set. Because of this, the authors manually select checkpoints between the 5th and 10th epochs (out of 15) using the held-out 50-example development set. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
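The held-out perplexity the authors track is the exponentiated mean negative log-probability per token. A minimal sketch, assuming the per-token probabilities have already been extracted from a model (the values below are invented):

```python
# Perplexity from per-token probabilities: exp of the average
# negative log-probability. Lower usually means a better fit --
# the point quoted above is that it stopped tracking response quality.
import math

def perplexity(token_probs):
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model assigning probability 0.25 to every held-out token has
# perplexity 4: it is as uncertain as a uniform 4-way choice.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # ≈ 4.0
```

This is why the negative correlation is surprising: a metric that normally signals overfitting rose even as judged generation quality improved, forcing the manual checkpoint selection described above.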

The Garrett Ashley Mullet Show
Foppery or Theodicy Relative Biblical Attitudes Regarding Masculinity and Femininity

Jan 14, 2023 · 110:25


"Wives, submit to your own husbands, as to the Lord. For the husband is the head of the wife even as Christ is the head of the church, his body, and is himself its Savior. Now as the church submits to Christ, so also wives should submit in everything to their husbands. Husbands, love your wives, as Christ loved the church and gave himself up for her, that he might sanctify her, having cleansed her by the washing of water with the word, so that he might present the church to himself in splendor, without spot or wrinkle or any such thing, that she might be holy and without blemish. In the same way husbands should love their wives as their own bodies. He who loves his wife loves himself. For no one ever hated his own flesh, but nourishes and cherishes it, just as Christ does the church, because we are members of his body. “Therefore a man shall leave his father and mother and hold fast to his wife, and the two shall become one flesh.” This mystery is profound, and I am saying that it refers to Christ and the church. However, let each one of you love his wife as himself, and let the wife see that she respects her husband." - Ephesians 5:22-33

"Or do you not know that the unrighteous will not inherit the kingdom of God? Do not be deceived: neither the sexually immoral, nor idolaters, nor adulterers, nor men who practice homosexuality, nor thieves, nor the greedy, nor drunkards, nor revilers, nor swindlers will inherit the kingdom of God. And such were some of you. But you were washed, you were sanctified, you were justified in the name of the Lord Jesus Christ and by the Spirit of our God." - 1 Corinthians 6:9-11

This Episode's Links:
- It's Good to Be a Man: A Handbook for Godly Masculinity - Michael Foster, Dominic Bnonn Tennant
- How do Safe Haven Baby Boxes work? - WTHR, YouTube
- Were Christians in the Roman Empire known to rescue abandoned babies? - Christianity.StackExchange.com
- Gwyneth Paltrow Warns Katy Perry That Kids Can Be Difficult On A Marriage; Perry Pushes Back - Katie Jerkovich, The Daily Wire
- What does the Bible say about fornication? - GotQuestions.org
- What Is Adultery? The Biblical Definition and Consequences - Christianity.com
- Polygyny - Wikipedia
- Child Marriage in the United States and Its Association With Mental Health in Women

--- Send in a voice message: https://podcasters.spotify.com/pod/show/garrett-ashley-mullet/message

Boston Computation Club
12/03/22: Depths of Wikipedia with Annie Rauwerda

Boston Computation Club

Play Episode Listen Later Dec 3, 2022 59:10


Annie Rauwerda is an internet personality and polymath with a background in neuroscience and data science. She is the host and operator of Depths of Wikipedia, a phenomenally popular meme page, which you can read about HERE on Wikipedia. Annie is also a frequent Wikipedia editor and author herself. Today she joined us to talk about how Wikipedia can be charming, funny, and informative, all at once. She showed us a variety of charming examples of Wikipedia in all its niche internet glory, and then answered a metric ton of questions about Wikipedia, the internet, Stack Exchange, etc. This was a super fun event and one we really enjoyed. We hope you enjoy it too!

Lexman Artificial
Locative Technology with Jeff Atwood (Bloconomics)

Lexman Artificial

Play Episode Listen Later Oct 23, 2022 3:05


In this episode, Lexman interviews Jeff Atwood, the author of "Bloconomics" and co-founder of StackExchange. They discuss locative technology, microdots, and bourgeon mushrooms.

Game Dev Arena
Arena 3.37 - Indie Game Dev Communities - GameDev StackExchange

Game Dev Arena

Play Episode Listen Later Oct 14, 2022 6:33


Checking out our final Indie Game Dev Community, which was recommended by Ask Gamedev on their YouTube channel. Check out my Social Media: Twitter - https://twitter.com/vigmu2 Tumblr - https://meedajoe0417.tumblr.com/ Discord - https://discord.gg/AYEAK5RmFR If you would like to donate to support my current work and further content, you can donate here -- https://bit.ly/3ea8q3u Provide thoughts on the show and join the email list for show notifications: https://bit.ly/3hGNqEP --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app --- Send in a voice message: https://anchor.fm/vigmu2-games/message Support this podcast: https://anchor.fm/vigmu2-games/support

The Chaincode Podcast
Sergei Tikhomirov and Lightning privacy - Episode 19

The Chaincode Podcast

Play Episode Listen Later Feb 17, 2022 60:30


Postdoc researcher Sergei joins Murch and Jonas to talk about channel balance probing in Lightning, privacy concerns in general, and the importance of researcher-developer collaboration. We discuss:
- Sergei's background (1:50) - Sergei's homepage with links to all prior research
- Lightning basics (2:50)
- Why LN payments fail (3:40)
- Why privacy is important (5:30)
- Privacy potential of Lightning vs L1 Bitcoin (6:40)
- How probing works (8:40)
- Why is balance discovery bad? (11:30)
- Persistent identities in Lightning (13:00)
- Multi-vector security model and trade-offs (17:45)
- "Twitter for your bank account" meme (20:20)
- The danger of overestimating Bitcoin's privacy (21:00)
- Lightning integrations and walled gardens (22:00)
- Lightning Service Providers and LN's centralized topology (23:05)
- LNBIG booth in El Salvador (25:30)
- Potential oligopoly of large nodes (27:15)
- Probing parallel channels (28:30) - Analysis and Probing of Parallel Channels paper
- Combining probing with jamming (33:00)
- The limit on in-flight payments (36:00) - StackExchange answer about transaction size limit
- Bad and good probing (41:20)
- Countermeasures and reputation (44:00) - Overview of anti-jamming measures
- Hub-and-spoke terminology and aviation analogy (49:00)
- Doing research in Bitcoin and Lightning (53:10)
- Why Bitcoin is unique (55:10)
- Researcher-developer collaboration (58:00)

Related research:
- On the Difficulty... -- the first paper about LN balance probing
- An Empirical Analysis paper about three LN attack vectors including probing
- Counting Down Thunder paper about timing attacks
- Congestion Attacks paper about jamming
- Cross-layer Deanonymization paper about linking L1 and L2
- Flood & Loot paper about malicious fee negotiation strategies
- Hijacking Routes paper about adversarial fee undercutting

Thanks to Justin for the sound engineering.
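The channel balance probing discussed in this episode has a simple core idea: send probe payments of varying amounts through a channel and binary-search the largest amount that still forwards. A toy Python sketch of that idea (the `can_forward` callback stands in for real probe payments; it is an illustration, not an actual Lightning API):

```python
def probe_balance(can_forward, capacity, tolerance=1):
    """Binary-search a channel's spendable balance.

    can_forward(amount) -> bool simulates sending a probe payment:
    True means the hop had enough local balance to forward it.
    """
    lo, hi = 0, capacity  # the balance is known to lie in [lo, hi]
    while hi - lo > tolerance:
        mid = (lo + hi) // 2
        if can_forward(mid):
            lo = mid   # payment got through, so balance >= mid
        else:
            hi = mid   # insufficient funds, so balance < mid
    return lo, hi

# Toy channel with a secret balance of 700,000 sats out of 1,000,000 capacity
secret = 700_000
lo, hi = probe_balance(lambda amt: amt <= secret, 1_000_000)
```

In practice a prober uses a payment with an unknown hash, so even a "successful" probe fails at the final hop and no money actually moves; parallel channels and the in-flight payment limit, both discussed in the episode, complicate this simple picture.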

Voice of the DBA
The Battlefield of Your Career

Voice of the DBA

Play Episode Listen Later Feb 14, 2022 2:49


Jeff Atwood has had quite a bit of success in his career, having been a founder of StackExchange and Discourse. He's a fellow alumnus of the University of Virginia, along with Alexis Ohanian. Both of them have had a higher profile than me, and I admire what they've accomplished. I also think they've both been advocates for the technology industry, helping and advising others on how they can succeed in their own careers. Jeff has been writing interesting posts about hardware and software for years, but the latest one struck me. It's called Learning on the Battlefield, and it has a lot of the same advice that I recently gave someone. In the post, Jeff notes that approaching your software career is like learning on the battlefield. It's like making weapons and coming up with tactics. You really need to test them on the battlefield. Read the rest of The Battlefield of Your Career

The Stack Overflow Podcast
Professional ethics and phantom braking

The Stack Overflow Podcast

Play Episode Listen Later Jan 4, 2022 20:27


Hear why Ben thinks the Workplace Stack Exchange and the Academia Stack Exchange have the richest questions in the Stack Exchange network (or maybe just the most sitcom-worthy). ICYMI: Jack Dorsey stepped down from Twitter. Will he be back? At Twitter, Tess Rinearson is leading a new team focused on crypto, blockchains, and decentralized tech. Follow her on Twitter here. The team winces over a review of a Tesla Model Y hatchback that describes phantom braking so frequent and so dangerous that it's "a complete deal-breaker." If you're a fan of our show, consider leaving us a rating and a review on Apple Podcasts.

Core Sampler
Episode 156: Sitecore Stack Exchange graduates from Beta

Core Sampler

Play Episode Listen Later Dec 16, 2021 12:37


In this episode, we sit down with Mark Cassidy and discuss Sitecore Stack Exchange graduating from Beta.

Salesforce Developer Podcast
108: Apex Development with Adrian Larson

Salesforce Developer Podcast

Play Episode Listen Later Nov 15, 2021 33:47


Adrian Larson is a Salesforce developer over at Counsel. Today we talk with him about his insights into Apex development. Throughout our conversation, we also get into some of Adrian's earliest experiences, including working with Python and what the transition from there to Java was like. We have a great discussion about StackExchange and optimizations as well. Tune in and learn from Adrian's years and depth of experience.

Show Highlights:
- What Adrian's current role looks like.
- How he got involved in StackExchange.
- What constitutes a good answer on StackExchange.
- How Adrian became a moderator for Salesforce.
- What a fluent pattern is and what it can help with.
- How Adrian and his team use micro-optimizations.
- What fluent query and fluent syntax mean.
- How Adrian utilizes dependency injection for performance increase.
- Why he works out an API first and then writes rigorous tests for it.
- The benefits of having a trigger handler class.

Links:
- Adrian on LinkedIn: https://www.linkedin.com/in/adrian-larson-a17a856/
- Adrian on Twitter: https://twitter.com/apexlarson
- Adrian on GitHub: https://github.com/apexlarson
- Adrian on StackExchange: https://salesforce.stackexchange.com/users/2995/adrian-larson
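The fluent pattern mentioned in the highlights is a chaining style in which each builder method returns the object itself, so a query can be assembled in one readable expression. A minimal Python sketch of the idea (the `SOQLQuery` class here is invented for illustration; it is not Adrian's actual code or a Salesforce API):

```python
class SOQLQuery:
    """Toy fluent builder for a SOQL-like query string."""

    def __init__(self, sobject):
        self.sobject = sobject
        self.fields = []
        self.conditions = []

    def select(self, *fields):
        self.fields.extend(fields)
        return self  # returning self is what makes the API "fluent"

    def where(self, condition):
        self.conditions.append(condition)
        return self

    def build(self):
        query = f"SELECT {', '.join(self.fields)} FROM {self.sobject}"
        if self.conditions:
            query += " WHERE " + " AND ".join(self.conditions)
        return query

# Calls chain because every intermediate method hands back the builder:
q = SOQLQuery("Account").select("Id", "Name").where("Industry = 'Tech'").build()
```

The appeal, as discussed in the episode, is that construction reads top to bottom as one expression, and each step stays independently testable.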

Crossing Borders with Nathan Lustig
Greatest Hits Episode: Christian Van der Henst: Helping Latin America Learn with Platzi, Ep 155

Crossing Borders with Nathan Lustig

Play Episode Listen Later Nov 3, 2021 47:40


For this week's episode of Crossing Borders, we're revisiting one of our greatest hits episodes featuring Platzi's Christian Van der Henst.

How did a curious young web developer from Guatemala become one of the first Latin American entrepreneurs to enter YCombinator? Christian Van der Henst fell in love with the internet in the 90s when he realized he could use it as a tool to communicate with the whole world. He knew he wanted to share his knowledge with people and collaborate with a global tech community long before Latin America's tech revolution even started.

Christian is a lifelong entrepreneur, but he didn't realize it until he was studying for his Master's in Barcelona while running a massive online platform, Maestros del Web, a proto-Stack Exchange for Latin America, at night. He eventually put his passion for education into Platzi, alongside Colombian co-founder Freddy Vega, and helped grow the company to US$3M in yearly revenue in just four years. In this episode, Christian talks about how he transitioned from Maestros del Web to Mejorando.la (before they rebranded to Platzi), how Platzi became the first startup serving Latinos to enter YCombinator, and why entrepreneurship is so important in Latin America right now.

A curious web developer – with 10 million monthly page views

Maestros del Web, Christian's first real business, started as a pet project for Christian to communicate with and learn from other web developers around the world. In the early days of the internet, the few portals that existed to help web developers solve problems were all in English, and Christian wanted a solution for his local community, too.

While the site was a huge success, Christian could barely manage his four-person team remotely as he studied for his Master's in Barcelona. He had accidentally built a huge community online – and realized he loved the feeling of bringing people together to talk about tech. Learn how Christian jumped from Maestros del Web to Platzi by befriending a competitor and starting to dream bigger.

From 100 audience members to 1200 viewers

Platzi started out as an in-person course, until Freddy and Christian realized that almost every attendee (including themselves!) was traveling to join the class. Platzi quickly migrated online, and the first online course had more than twice as many students as the co-founders expected. The server crashed and the first streaming was a failure, says Christian, but the audience was forgiving.

Why? Transparency has been a key to Platzi's success from the start. The founders explained their tech problems to their users, fixed them, then kept providing classes. In this episode of Crossing Borders, learn why Christian thinks failing the first live stream made Platzi stronger and more resilient.

If you really want something, keep trying

Platzi was rejected from YCombinator the first time they applied. Christian almost didn't apply again, until a mentor mentioned that he should give it another shot. Platzi is now known for being the first fully Latin American company to enter the accelerator.

Christian thinks that every startup should apply for YC, just as an exercise to find out if what they are doing is worth building. He encourages Latino founders to ask for what they want (a tip he's learned from working in San Francisco); in Silicon Valley, trying hard and failing is better than never trying at all. That's how Platzi has reached US$3M in revenue a year, and still offers free courses on YouTube.

Platzi has already become an internationally-recognized brand for online education as they help the Spanish-speaking market become digitally native. The platform is an example of what it means to listen to your local community and build a product for them before launching it globally. Christian Van der Henst is passionate about building entrepreneurial communities in Latin America, and Platzi is just one of the tools helping local entrepreneurs get started across the Spanish-speaking world.

Outline of this episode:
[1:54] – Splitting time between Platzi offices in Bogota, San Francisco, and Mexico City
[2:31] – What does Platzi do?
[3:46] – Were you always an entrepreneur?
[4:57] – How did you get started with Maestros del Web?
[5:56] – Growing up in Guatemala
[8:40] – How Christian fell in love with the internet at a young age
[11:26] – Going from Guatemala to Barcelona
[12:37] – What were some of the biggest challenges you had to overcome going from 0 to 10M page views?
[15:42] – What was it like going from being a solo founder to teaming up with somebody?
[16:55] – Why Mejorando.la went from live classes to streaming and YouTube
[18:24] – Why mistakes are good for learning
[19:39] – Why Latino founders should learn to put themselves out there and stop fearing failure
[22:53] – Rebranding: Your users care more about product than name
[24:15] – Why they targeted the Spanish-speaking market
[25:23] – What was it like to enter YCombinator as the first fully Latin American company?
[27:30] – Biggest surprises participating in YCombinator
[29:19] – Why Colombia is leading the way for international investment in LatAm
[31:17] – When did you know Platzi would grow to be something big?
[34:25] – Why keep an office in San Francisco?
[36:24] – What advice would you give to other Latin American founders when they are trying to raise money?
[38:02] – Why should more US VCs be looking at companies from LatAm?
[40:26] – Why fintech is so important in LatAm
[42:53] – What's your advice for people that want to start learning to code?
[44:05] – If you could go back to when you were first starting Mejorando.la, what advice would you give yourself?

This episode of Crossing Borders is brought to you by AWS Startups. AWS Startups supports entrepreneurs in Latin America across multiple programs, including Cloud Credits to help startups test features and extend runway, technical support to help optimize AWS solutions and integrations with your product, and – on the business side – help you build strategic contacts with investment funds, accelerators, and corporations to accelerate your growth. For more information, check out aws.amazon.com/es/campaigns/founders, where you can access $1,350 in AWS credits for your startup.

Salesforce Developer Podcast
106: FFLib and Apex Design Patterns with Eric Kintzer

Salesforce Developer Podcast

Play Episode Listen Later Nov 1, 2021 35:50


Eric Kintzer is a Salesforce Architect over at Helix. Today I talk with him about FFLib and Apex design patterns. We discuss how his discovery of Andrew Fawcett's book on the topic changed his entire perspective on architecture and development. Eric describes himself as an all-singing, all-dancing Salesforce developer, architect, and admin. He has had to identify business needs, architect around them, and then develop the solution himself. As a result, he has a very interesting journey and unique insights to bring. Tune in to learn more from him.

Show Highlights:
- Eric's Salesforce journey.
- His approach to Apex and other complex programs.
- How Andrew Fawcett's design architecture book changed Eric's career and life.
- The four key design patterns in that book.
- How enterprise patterns impact testing.
- What a mock object is.
- How the FFLib project works and its many advantages.
- How to migrate to FFLib well.
- Why FFLib is important if you care about your craft as a developer.
- How Eric got involved with Stack Exchange.

Links:
- Eric on Twitter
- Eric on LinkedIn
- Eric Kintzer SFDC Blog
- Salesforce Lightning Platform Enterprise Architecture book by Andrew Fawcett
- apex-enterprise-patterns (aka FFLib) GitHub repo
- Apex Enterprise Patterns Trailmix
- Salesforce Developer Survey (through Nov 30th, 2021)
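A mock object, one of the episode's topics, stands in for a real dependency (such as a database-backed selector) so unit tests run fast and deterministically. A hedged Python sketch of the dependency-injection idea behind FFLib-style testing (the class names are invented for illustration; real FFLib is Apex, not Python):

```python
class AccountsService:
    """Business logic that depends on an injected selector (repository)."""

    def __init__(self, selector):
        self.selector = selector  # dependency injection: real or mock

    def count_tech_accounts(self):
        return sum(1 for a in self.selector.select_all()
                   if a.get("Industry") == "Tech")


class MockAccountSelector:
    """Mock that returns canned records instead of querying a database."""

    def select_all(self):
        return [{"Name": "Acme", "Industry": "Tech"},
                {"Name": "Globex", "Industry": "Retail"}]


# The test wires in the mock; production code would inject a real selector.
service = AccountsService(MockAccountSelector())
result = service.count_tech_accounts()
```

Because the service only sees the selector's interface, the same logic can be exercised against canned data in tests and real queries in production, which is the core of the enterprise-pattern layering discussed here.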

Sparking Faith Podcast
Psalm 23 – Fri – 21-10-08

Sparking Faith Podcast

Play Episode Listen Later Oct 8, 2021 2:00


The 23rd Psalm pictures God as a shepherd. We are the sheep under his care. However, the imagery seems to shift in verse 5, which reads, “Thou preparest a table before me in the presence of mine enemies: thou anointest my head with oil; my cup runneth over.” Sheep do not eat at tables, nor drink from cups. So what is this verse telling us about the care of God when we depend on him like sheep depend on a shepherd? The picture of the table is that of a king preparing a banquet for a guest.* The king's generosity to the guest is witnessed by the enemies, who can only watch. Anointing the head with oil was done as a special honor to a guest at a feast. The olive oil was probably spiced to have a pleasing aroma. Jesus mentions this practice in Luke 6:42. The overflowing cup was also a symbol of generous hospitality. The host of the banquet lavished drink on the guest. God's care for us is not simply providing the basics. He doesn't just provide our daily needs, guide us in righteousness and protect us from evil. God is so extravagant! He treats us like honored guests at a sumptuous banquet! It seems to echo Ephesians 1:8 which says God lavished his grace upon us. Don't just see God as the shepherd, see him as the king pouring out his goodness on us! *"Psalm 23, '...table in the presence of my enemies…'" StackExchange, edited June 17, 2020, https://hermeneutics.stackexchange.com/questions/15552/psalm-23-table-in-the-presence-of-my-enemies. Please provide feedback and suggestions at: https://www.sparkingfaith.com/feedback/ Bumper music “Landing Place” performed by Mark July, used under license from Shutterstock.

Cloud Posse DevOps
Cloud Posse DevOps "Office Hours" (2021-09-15)

Cloud Posse DevOps "Office Hours" Podcast

Play Episode Listen Later Sep 15, 2021 56:32


Cloud Posse holds public "Office Hours" every Wednesday at 11:30am PST to answer questions on all things related to DevOps, Terraform, Kubernetes, CICD. Basically, it's like an interactive "Lunch & Learn" session where we get together for about an hour and talk shop. These are totally free and just an opportunity to ask us (or our community of experts) any questions you may have.

You can register here: https://cloudposse.com/office-hours
Join the conversation: https://slack.cloudposse.com/
Find out how we can help your company: https://cloudposse.com/quiz https://cloudposse.com/accelerate/
Learn more about Cloud Posse: https://cloudposse.com https://github.com/cloudposse https://sweetops.com/ https://newsletter.cloudposse.com https://podcast.cloudposse.com/

[00:00:00] Intro
[00:01:17] Terraform AWS EC2 Client VPN Module released - https://github.com/cloudposse/terraform-aws-ec2-client-vpn
[00:01:54] OMIGOD! Azure RCE: "Secret" Agent Exposes To Unauthorized Code Execution - https://www.wiz.io/blog/secret-agent-exposes-azure-customers-to-unauthorized-code-execution
[00:04:04] New OWASP Top 10 for 2021 (Open Web Application Security Project) - https://owasp.org/Top10/
[00:04:50] GitHub CLI now supports extensions! - https://github.blog/2021-08-24-github-cli-2-0-includes-extensions/
[00:07:20] Custom widgets for CloudWatch dashboards - https://aws.amazon.com/about-aws/whats-new/2021/08/custom-widgets-amazon-cloudwatch-dashboards/
[00:07:46] ElastiCache for Redis now supports auto scaling - https://aws.amazon.com/about-aws/whats-new/2021/08/amazon-elasticache-redis/
[00:08:09] AWS CloudFormation Can Retry Stack Operations from the Point of Failure - https://aws.amazon.com/blogs/aws/new-for-aws-cloudformation-quickly-retry-stack-operations-from-the-point-of-failure/
[00:08:51] Amazon Elasticsearch Service Is Now Amazon OpenSearch Service - https://aws.amazon.com/blogs/aws/amazon-elasticsearch-service-is-now-amazon-opensearch-service-and-supports-opensearch-10/
[00:24:55] Anyone using Stack Exchange for teams?
[00:28:35] Terraform Cloud Alternatives?
[00:36:15] How to implement maintenance pages and activate them?
[00:43:10] Does anyone use a span trace viewer as a primary view into a local development environment? (e.g. honeycomb UI, Perfetto)
[00:49:15] Any best practices for organizing your TF configs for different environments, but keeping common variable settings in just one place?
[00:52:55] Nomad for application CD
[00:55:27] Outro

#officehours #cloudposse #sweetops #devops #sre #terraform #kubernetes #aws

Support the show (https://cloudposse.com/office-hours/)

Increments
#31 - The Fall of the Weinstein Republic

Increments

Play Episode Listen Later Sep 14, 2021 54:51


Today we take your twitter questions before doing a deep dive into the Weinstein fiasco (Bret and Eric, not Harvey). If you haven't heard of the Weinsteins before, then we suggest you run away before we drag you down into a rabbit hole filled with acronyms, anti-vaxxers, and theories of ... everything? anything? literally anything at all?

Topics we touch:
- We take your twitter questions!
- Filos with a weird one (https://twitter.com/iamFilos/status/1424025239370047488): "I have a weird one that could be fun. It seems to me that the idea that we could upload our minds to a computer is nonsense. I agree with Kastrup that what we would upload is a description of our minds, and a description of something is not that something. And it seems this desire for immortality is the nerd's reinvention of God via AGI, and heaven via uploading a mind to a silicon substrate. Where do you fall in this mind uploading fantasy? Possible? Religious impulse? Reasonable?"
- Dan would like us to talk about (https://twitter.com/danieljhageman/status/1424008345309126660): the pervasive skepticism that seems to run through much of the Popperian and Crit Rat communities regarding nonhuman animals' capacity to suffer, particularly factory-farmed animals.
- Karl is interested in (https://twitter.com/krlwlzn/status/1424025137481912330): "I'm interested in the meta-question of why that issue seems to split the community in two. Why hasn't one view become the dogmatic truth yet, as it seems to have in most other communities?"
- WTF is up with Bret and Eric Weinstein
- The allure of reflexive contrarianism
- The (horrible! awful! stop it!) tendency of academics to use convoluted language to impress their non-peers
- The notion of "secular gurus" and what distinguishes a secular guru from a person with a large platform
- And the special responsibility of researchers to communicate clearly.

References:

Animal Suffering
- Bruce Nielson's blog post (https://fourstrands.org/2021/04/15/do-animals-experience-qualia/) on whether animals experience qualia, and his second (https://fourstrands.org/2021/06/08/the-current-science-of-animal-emotions/) on animal emotions. We mostly discuss the first.

Weinsteins
- Eric Weinstein's excellent first appearance (https://samharris.org/podcasts/faith-in-reason/) on Sam Harris's podcast
- Geometric Unity website (https://geometricunity.org/)
- Geometric Unity pdf (https://geometricunity.nyc3.digitaloceanspaces.com/Geometric_Unity-Draft-April-1st-2021.pdf)
- See Timothy Nguyen on the Wright Show (https://www.youtube.com/watch?v=j86WIfRfPDk&ab_channel=Bloggingheads.tv) and Decoding the Gurus (https://decoding-the-gurus.captivate.fm/episode/special-episode-interview-with-tim-nguyen-on-geometric-unity) for an excellent overview of the whole scandal
- ... and check out Timothy Nguyen on Eigenbros (https://www.youtube.com/watch?v=o31cGMENDTI&ab_channel=Eigenbros) for a deep dive into the technical nitty-gritty
- Norbert Blum's original paper (https://arxiv.org/pdf/1708.03486v1.pdf) purporting to show that P is not equal to NP.
- A nice answer (https://cstheory.stackexchange.com/questions/38803/is-norbert-blums-2017-proof-that-p-ne-np-correct) on Stack Exchange detailing why Blum's proof was wrong.

Quotes:
"Every intellectual has a very special responsibility. He has the privilege and the opportunity of studying. In return, he owes it to his fellow men (or 'to society') to represent the results of his study as simply, clearly and modestly as he can. The worst thing that intellectuals can do - the cardinal sin - is to try to set themselves up as great prophets vis-à-vis their fellow men and to impress them with puzzling philosophies. Anyone who cannot speak simply and clearly should say nothing and continue to work until he can do so." - Karl Popper, Against Big Words (http://www.the-rathouse.com/shortreviews/Against_Big_Words.pdf)

What would you say to your half million twitter followers who want to know your opinion on everything? Tell us at incrementspodcast@gmail.com.

JACK BOSMA
JACKBOSMA: StackExchange

JACK BOSMA

Play Episode Listen Later Sep 5, 2021 2:15


https://stackexchange.com/sites and https://stackexchange.com/users/9500157/jack-bosma #stackexchange #skills #learning --- Send in a voice message: https://anchor.fm/jack-bosma3/message Support this podcast: https://anchor.fm/jack-bosma3/support

yegor256 podcast
Shift-M/48: Jeff Atwood about knowledge management in software teams

yegor256 podcast

Play Episode Listen Later Aug 23, 2021 58:07


Jeff Atwood is an American software developer, author, blogger, and entrepreneur. He writes the computer programming blog Coding Horror. He co-founded the computer programming question-and-answer website Stack Overflow and co-founded Stack Exchange, which extends Stack Overflow's question-and-answer model to subjects other than programming. Jeff's blog: https://www.codinghorror.com

Business of Software Podcast
Ep 73 The Developer's Guide To Running Sales Teams (with Jeff Szczepanski)

Business of Software Podcast

Play Episode Listen Later Jun 29, 2021 61:36


Jeff 'Tall Jeff' Szczepanski from Stack Exchange delivers a talk on his experiences of what happens when you think about and manage sales like you would manage a great development team. Don't forget to sign up for the BoS Newsletter and keep up to date with everything going on at BoS. Visit businessofsoftware.org and click Subscribe! --- Send in a voice message: https://anchor.fm/business-of-software/message

Die Weisheit anderer Leute
Selig der Schoss

Die Weisheit anderer Leute

Play Episode Listen Later May 24, 2021 10:33


Interlinear Luke 11 (Greek/English)
Jimmy Akin on the passage (blog, English)
StackExchange on the passage (forum, English)
Beda Venerabilis (Wikipedia)
Catena Aurea: Luke 11 (English)
Augustine: De virginitate (English)
Chrysostom: Homily 44 on Matthew

Devchat.tv Master Feed
Changes in the JAMstack Landscape with Sean C Davis - JSJ 482

Devchat.tv Master Feed

Play Episode Listen Later May 4, 2021 63:58


Dan kicks the show off by asking our guest Sean C. Davis to define for us what doesn't fall under JAMstack. Sean explains what isn't JAMstack and then dives into what's changed over the last year or so that brings us to the tools and approaches that hybridize the server end of things to bring more server side to the JAMstack. So, JAMstack lifts away from a monolithic backend to provide an independent front-end with a supporting set of back-end tools, rather than a back-end with supporting front-end tools. This episode dives into the implications of this approach as a reaction to the more traditional monolith.

Panel: AJ O'Neal, Dan Shappir
Guest: Sean C Davis

Sponsors:
- Dev Influencers Accelerator
- Raygun | Click here to get started on your free 14-day trial
- JavaScript Error and Performance Monitoring | Sentry

Links:
- Comparing Static Site Generator Build Times | CSS-Tricks
- Grouparoo: Open Source Synchronization Framework
- Unmute Your Story | Unmute

Picks:
- AJ - Follow Beyond Code | Facebook
- AJ - Twitter: Beyond Code Bootcamp (@_beyondcode)
- AJ - vim-essentials | webinstall.dev
- AJ - StackExchange
- AJ - Stack Overflow: The Architecture - 2016 Edition
- AJ - Comparing Static Site Generator Build Times | CSS-Tricks
- AJ - Digital Ocean ($100 or 60 Days Free)
- Dan - How Wix improved website performance by evolving their infrastructure
- Dan - Who has the fastest F1 website in 2021? Part 1
- Sean - Free JavaScript Resources
- Sean - Ted Lasso

JavaScript Jabber
Changes in the JAMstack Landscape with Sean C Davis - JSJ 482

JavaScript Jabber

Play Episode Listen Later May 4, 2021 63:58


Dan kicks the show off by asking our guest Sean C. Davis to define for us what doesn't fall under JAMstack. Sean explains what isn't JAMstack and then dives into what's changed over the last year or so that brings us to the tools and approaches that hybridize the server end of things to bring more server side to the JAMstack. So, JAMstack lifts away from a monolithic backend to provide an independent front-end with a supporting set of back-end tools, rather than a back-end with supporting front-end tools. This episode dives into the implications of this approach as a reaction to the more traditional monolith.

Panel: AJ O'Neal, Dan Shappir
Guest: Sean C Davis

Sponsors:
- Dev Influencers Accelerator
- Raygun | Click here to get started on your free 14-day trial
- JavaScript Error and Performance Monitoring | Sentry

Links:
- Comparing Static Site Generator Build Times | CSS-Tricks
- Grouparoo: Open Source Synchronization Framework
- Unmute Your Story | Unmute

Picks:
- AJ - Follow Beyond Code | Facebook
- AJ - Twitter: Beyond Code Bootcamp (@_beyondcode)
- AJ - vim-essentials | webinstall.dev
- AJ - StackExchange
- AJ - Stack Overflow: The Architecture - 2016 Edition
- AJ - Comparing Static Site Generator Build Times | CSS-Tricks
- AJ - Digital Ocean ($100 or 60 Days Free)
- Dan - How Wix improved website performance by evolving their infrastructure
- Dan - Who has the fastest F1 website in 2021? Part 1
- Sean - Free JavaScript Resources
- Sean - Ted Lasso

All JavaScript Podcasts by Devchat.tv
Changes in the JAMstack Landscape with Sean C Davis - JSJ 482

All JavaScript Podcasts by Devchat.tv

Play Episode Listen Later May 4, 2021 63:58


Dan kicks the show off by asking our guest Sean C. Davis to define for us what doesn't fall under JAMstack. Sean explains what isn't JAMstack and then dives into what's changed over the last year or so that brings us to the tools and approaches that hybridize the server end of things to bring more server side to the JAMstack. So, JAMstack lifts away from a monolithic backend to provide an independent front-end with a supporting set of back-end tools, rather than a back-end with supporting front-end tools. This episode dives into the implications of this approach as a reaction to the more traditional monolith.

Panel: AJ O'Neal, Dan Shappir
Guest: Sean C Davis

Sponsors:
- Dev Influencers Accelerator
- Raygun | Click here to get started on your free 14-day trial
- JavaScript Error and Performance Monitoring | Sentry

Links:
- Comparing Static Site Generator Build Times | CSS-Tricks
- Grouparoo: Open Source Synchronization Framework
- Unmute Your Story | Unmute

Picks:
- AJ - Follow Beyond Code | Facebook
- AJ - Twitter: Beyond Code Bootcamp (@_beyondcode)
- AJ - vim-essentials | webinstall.dev
- AJ - StackExchange
- AJ - Stack Overflow: The Architecture - 2016 Edition
- AJ - Comparing Static Site Generator Build Times | CSS-Tricks
- AJ - Digital Ocean ($100 or 60 Days Free)
- Dan - How Wix improved website performance by evolving their infrastructure
- Dan - Who has the fastest F1 website in 2021? Part 1
- Sean - Free JavaScript Resources
- Sean - Ted Lasso

mixxio — podcast diario de tecnología
Hacker bueno, hacker malo

mixxio — podcast diario de tecnología

Play Episode Listen Later Apr 22, 2021 12:22


 Signal will actively fight Cellebrite's data extraction. Signal's founder got hold of a Cellebrite forensic kit, used to extract data from iPhone and Android devices, discovered multiple vulnerabilities in its software, and announced that Signal will start embedding files that interfere with that analysis. There's a lot to this story; I'll go into it on the podcast.  Personal data of 5 million Spaniards leaked after The Phone House was hacked. For days the attackers threatened the company with publishing the data, including bank accounts, emails, identity documents, phone numbers, and even names and addresses. The data is already circulating.  It has been loaded into HIBP, so you can enter your phone number or email and see whether The Phone House shows up in your list of breaches.  It remains to be seen what the Data Protection Agency decides, and in the meantime those affected could initiate civil proceedings.  Stack Overflow reveals how many times we copy code from its pages. For the past few weeks, the code of the Stack Exchange sites has included a piece of JavaScript that tracked when and where visitors ran the copy command, and they report which code gets copied the most, and by whom.  We analyze everything Apple presented on Monday in the new episode of Cupertino, our weekly podcast about Apple. There are many curious details about the AirTags and the new iMacs that the company didn't mention during the presentation and that you should know before buying them.  Apple plans to expand its advertising business. Now that the new iOS restrictions will make it harder for tech companies other than Apple to collect data, Apple will offer companies new advertising services, with more ads in the App Store, and keep the money itself.  
Basically, what Mark Zuckerberg said in this case is true: Apple changed its privacy rules now because it was about to launch more advertising tools of its own.  Linux programs with graphical interfaces now run on Windows with the new versions of WSL, available in the Windows 10 preview builds. A luxury that will let you have applications from both systems fully integrated, without complications or emulation.  Linux catches the University of Minnesota sending patches with bugs to the kernel. Apparently they were doing a field study that consisted of submitting these kinds of improvements with deliberately introduced software flaws to evaluate the review process. The maintainers are not at all happy and have banned the entire institution from contributing.  An excellent summary of the long-term situation of battery factories. The US has only three factories, Europe a few more, but the vast majority are still in China. There is a risk of repeating the same mistakes made with the oil supply, depending on a series of more or less autocratic countries for it, and ending up back at square one.  Neither RISC-V nor MIPS: China is looking for another processor architecture from scratch. LoongArch in China has announced a new architecture for its processors, instead of using RISC-V or MIPS, the biggest international free standard, because they can't go with ARM or x86. It will be interesting to see whether, in a few years, something comes of all this money invested in reinventing the wheel for legal reasons.
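The copy-measurement experiment Stack Overflow describes in this episode boils down to listening for the browser's copy event and reporting where it happened. Here is a minimal sketch of that technique; it is not Stack Overflow's actual snippet, and the reporting endpoint is invented:

```javascript
// Build the payload a copy-tracking script might report.
// Illustration of the technique only, not Stack Overflow's code.
function describeCopyEvent(selectedText, pageUrl, when) {
  return {
    url: pageUrl,
    copiedChars: selectedText.length,
    at: when.toISOString(),
  };
}

// In a real page this would be wired to the DOM, roughly:
// document.addEventListener("copy", () => {
//   const payload = describeCopyEvent(
//     String(window.getSelection()), location.href, new Date());
//   navigator.sendBeacon("/recordCopy", JSON.stringify(payload)); // hypothetical endpoint
// });

const sample = describeCopyEvent(
  "console.log('hi')",
  "https://stackoverflow.com/q/1",
  new Date(0)
);
```

Using `sendBeacon` rather than a blocking request is the usual choice for this kind of telemetry, since it does not delay the page while the data is sent.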

Einundzwanzig - Der Bitcoin Podcast
Interview #30 - Mempool, Taproot, and Bitcoin Stack Exchange with Murch

Einundzwanzig - Der Bitcoin Podcast

Play Episode Listen Later Mar 1, 2021 94:12


Interview with Murch - block height 672009 - by and with Murch, Markus and Dennis. Topics: the mempool (Johoe's Bitcoin Mempool website), Taproot activation (Bitcoin Magazine articles on LOT: Ben Kaufman, Aaron van Wirdum), Bitcoin Stack Exchange. Visit our website. Join the discussion in our community. Follow the latest headlines in the newsfeed. For 50,000 sats you can leave us a shoutout.
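A mempool viewer like Johoe's, discussed in the episode, is at its core sorting unconfirmed transactions by feerate and seeing which would fit into the next block. A toy sketch of that idea follows; the transaction data is invented, and real block template construction also weighs ancestor packages, which this ignores:

```javascript
// Toy next-block estimate: sort pending transactions by feerate
// (sat/vB) and greedily fill a virtual-size budget. Transaction
// data is invented; real miners also consider ancestor packages.
function nextBlockCandidates(txs, maxVbytes) {
  const byFeerate = [...txs].sort((a, b) => b.feerate - a.feerate);
  const picked = [];
  let used = 0;
  for (const tx of byFeerate) {
    if (used + tx.vbytes <= maxVbytes) {
      picked.push(tx.id);
      used += tx.vbytes;
    }
  }
  return picked;
}

const mempool = [
  { id: "a", feerate: 5, vbytes: 400 },
  { id: "b", feerate: 80, vbytes: 300 },
  { id: "c", feerate: 20, vbytes: 500 },
];
const winners = nextBlockCandidates(mempool, 700);
```

With a 700-vbyte budget the high-feerate transaction gets in first, the mid-feerate one no longer fits, and the cheap small one fills the remainder, which is exactly the banding those mempool charts visualize.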

The Investor Mindset - Real Estate Show
E184: The Subtle Art - Mark Manson

The Investor Mindset - Real Estate Show

Play Episode Listen Later Jan 14, 2021 29:41


This week I'm very lucky to be joined by an amazing guest and someone that I personally admire for their exceptional and unique work. It's none other than Mark Manson - the best-selling author of the incredible book “The Subtle Art of Not Giving a Fuck”. In this episode Mark takes us through how he started as a writer and what inspired him to write his smash hit book. We also go deep into what his book and philosophy really mean… because it's not as simple as the title might suggest. We go through what is worth investing our time and energy into, “The Attention Diet”, how to set boundaries for yourself and much more. Mark has been published and/or featured in over 50 of the biggest newspapers, magazines, and television/radio shows on the planet, including NBC, CNN, Fox News, the BBC, Time Magazine, Larry King, Dr. Oz, New York Times, New York Post, USA Today, Buzzfeed, Vice, and Vox, among many others. Men's Health called him “unnervingly well-read” and the Sunday Times described his writing as “the local drunk who spent too much time in the philosophy section of the bookstore.” (He took it as a compliment.) Mark has had the great fortune to speak to some of the most successful and innovative companies on the planet, including Google, Microsoft, Blackstone, Stack Exchange, Xero and LinkedIn, among others. He's also been a guest lecturer at a number of universities. But before Mark was an author, he was a blogger. He started a blog in 2009 and within a few years it was being read by more than a million people each month. Today, his site is read by more than 15 million people each year, and in 2015 he was one of the first brands to launch a paid subscription model that has since been adopted by most of the online publishing industry. Mark has never hosted ads and never will. Hit subscribe, join the community, and dive into this amazing episode with one of the most sought-after and inspirational authors on the planet right now.
Have you read Mark's book and put it into practice? Tell us in the comments below.

KEY TAKEAWAYS
1. The quality of your life is determined by the quality of the things that you choose to care about.
2. With all the information available to us via the internet and social media, it can be confusing to know what is worth investing our time and energy into.
3. “The Attention Diet”: Like being conscious of what food we put into our bodies, we need to be conscious of what information we put into our minds. If we indulge in too much crap then it can make us physically AND mentally obese.
4. Life is short and we only have enough time to become excellent at one or two things. It's important to get clear on what these are, focus on your strengths and not on the nonsense online, or in life, that isn't going to get you anywhere.
5. The people who are most free right now are the people who know how to set boundaries for themselves and limit technology and information.
6. “Who you are is defined by what you're willing to struggle for.”

BOOKS
The Subtle Art of Not Giving a F*ck: A Counterintuitive Approach to Living a Good Life - https://amzn.to/2Xv3D5b
The Passive Investing Playbook - https://theinvestormindset.com/passive

LINKS
Mark Manson - www.markmanson.net
Learn more about our fund opportunities - https://www.vonfinch.com/fund
Learn more about investing with Steven at https://theinvestormindset.com/invest
Join the MultiFamilyMBA and get exclusive free training: https://theinvestormindset.com/mfmba

Poor Man's Podcast
Poor Man's Answers Questions

Poor Man's Podcast

Play Episode Listen Later Aug 5, 2020 34:39


Why do we always put the word barrel in these descriptions? Why do we insist on making these recordings publicly accessible? Why are all of our recordings so terribly balanced? We answer none of these questions in this week's episode, but we do try our hand at answering some questions from Stack Exchange instead. Outro Song: Something Elated by Broke for Free

Last Week in .NET
August 1, 2020 - .NET Foundation: Friend or Foe?

Last Week in .NET

Play Episode Listen Later Aug 3, 2020 29:34


VB.NET "Not along for the ride" in .NET Core and .NET 5. Eject Mailman, eject. For those of you that were hoping for VB.NET to get some love in .NET 5, it doesn't look like it's going to happen. This is of course causing some consternation; but overall I get it. Visual Basic was written for a time when we really thought we could make a language look like English and not be laughed out of the room. Now we know better. VB.NET has done good things; and I know a few products even today that are still written in VB.NET; but look, it's time. Just look at the flowers, VB.NET. Visual Studio 2019 version 16.7 Preview 6 is now available. Most of us are probably on the Visual Studio stable channel, but if you like to get the previews (they're free), you can install them. Interesting to me is that this version adds support for Xcode 11.6? I don't even know what this means but here we are and that sounds cool as $#&@. Microsoft .NET team is hiring. You can apply to become a Program Manager II on the .NET team. I thought about applying, but realized "allowing everyone to be their authentic selves" probably doesn't mean "making fun of Microsoft on a daily basis". Seriously though, if you can move to Redmond, you should think about applying. .NET is entering its best years; and Microsoft is one of the better companies to work for. Microsoft's Roslyn team (the compiler for .NET) released a blog post detailing productivity improvements: tooling fixes that are in Visual Studio 2019 16.6 that you may have missed. My favorites are the DateTime formatting changes. You no longer have to Google which combination of MMDDYYYY gets you what you want; they now provide that information in the IntelliSense when you use DateTime.ToString(). This is a long overdue feature and I'm glad they added it. 
Their code refactorings are getting better, though I still prefer JetBrains ReSharper. .NET Foundation "State of the Foundation". The .NET Foundation released its State of the Foundation report for 2020. They have 800 members, which is a growth of 100% from last year, and 5 corporate sponsors, as well as its plan for the coming year. I'm glad to see this sort of transparency; and while I have some reservations about the .NET Foundation, this is a step in the right direction. They also released their budget; and this will get better, but they spent a grand total of $558 on sponsorships this year. You'd hope to see that get much better, and that's the metric I'll be using to judge whether or not they're having the right impact on the .NET community. Stack Overflow infographic: Stack Overflow (the company) released its performance metrics for its collection of Q&A sites on stackexchange.com (what the company used to be named, but then realized that was a terrible name and changed to the same namesake as its flagship Q&A site). So anyway, if you want to know how 300+ Stack Exchanges perform, you'll want to see this. The sheer speed of the Stack Exchange network got the Hacker News folks all in a tizzy. Any day we can tout how well .NET performs and piss off Hacker News is a good day. .NET Conf - "Focus on Microservices". .NET Conf held an all-day conference to talk microservices; and I live tweeted it. I've got some pretty nasty scars (and a few fond memories) of working with microservices; and if that sort of thing interests you, check my live thread on it. 
If your architect is practicing Resume Driven Development or you work with really large software teams, you should watch the videos with interest; for the rest of us, the conference probably isn't worth your time unless you really want to learn about some frameworks that can help you build microservices in .NET. Pretty Fricking Cool Library of the Week (PFCLotW): This week's cool library is Bogus, which allows you to generate fake data for your application. It's a pretty neat library; and you should check it out. I've used it on quite a few occasions, and it's worth your time. In today's podcast episode, I'm diving deeper into what the .NET Foundation is, and whether it's "good for us" as a community in its current form. The episode should drop by Noon EDT (-4 UTC) today; so give it a listen if that's a subject that interests you. Transcript (Powered by otter.ai): George Stocker  0:00  Hi, I'm George Stocker, and welcome to Last Week in .NET. VB dotnet is not along for the ride in dotnet core and dotnet five. Now for those of you who were hoping to get VB dotnet in dotnet five, it doesn't look like it's going to happen. So of course, it's going to cause some consternation among VB dotnet developers, and I get it. Visual Basic was written for a time where we thought we could really make a language look like English and not be laughed out of the room. Now we know better. VB dotnet has done good things. And I know a few products today, they're still written in VB dotnet. But look, it's time. Visual Studio 2019 version 16.7 Preview six is now available. Now this is pretty cool. You can actually get advanced versions of Visual Studio, whatever the next minor version is, you can get advanced versions of it for free without a license. They're previews, and so they might have bugs in them, but if you want to check out what's coming up in Visual Studio, it's always an interesting install. 
Now this one is interesting to me because it adds support for Xcode 11.6. I really don't know what this means. But I want to find out, because this is really cool. Microsoft's dotnet team is hiring, you can actually apply to become a program manager for the dotnet team at Microsoft. I thought about applying, but then realized that allowing everyone to be their authentic selves probably doesn't mean making fun of Microsoft on a daily basis. Seriously, though, if you can move to Redmond, you should think about applying. dotnet is entering into its best years. And Microsoft really is one of the better large companies to work for. Microsoft's Roslyn team, that's the team that produces the compiler for dotnet, released a blog post about productivity improvements in their latest push for Roslyn. Now, this was in 16.6. So you may have missed it. It's been out for a few weeks. But what I just noticed is that they've added changes that allow you to see how your date time is going to be formatted when you say DateTime.ToString(). You have all those options; they now give you IntelliSense for those options, and they tell you what they mean. That's wonderful. It's way long overdue. There are other code refactorings. For this, I still prefer JetBrains ReSharper. But again, something you should take a look at. The dotnet foundation released its state of the foundation blog post for 2020. Now, this year, they have 800 members, which is 100% growth from last year. And they now have five corporate sponsors. This state of the foundation also includes their upcoming plan. I'm pretty glad to see this sort of transparency. I do have some reservations about the dotnet foundation. I do believe that publishing this is a step in the right direction. They also released their budget, and this will get better, but they spent a grand total of $558 in sponsorships this year. 
You'd hope to see that get much higher if it actually means what I think it means, which is sponsoring open source projects. And that's a metric I'm going to be using to judge whether or not they're having the right impact on the dotnet community. But you have to start somewhere, and they started at $558 worth of somewhere. Stack Overflow released its performance metrics for its Stack Exchange sites on stackexchange.com. Now, the company's called Stack Overflow; it used to be called Stack Exchange. The network is still called Stack Exchange. But the company changed its name back to its flagship site, which is Stack Overflow. Anyway, if you want to know how well the sites perform, you can check out the link at stackexchange.com/performance. And the sheer speed of the Stack Exchange network being hosted on dotnet got the Hacker News folks all upset, and any day we can see how well dotnet performs and piss off Hacker News, that's a good day. dotnet conf held their Focus on Microservices virtual conference on July 30, and I have a thread live tweeting it. Now I've got some pretty nasty scars and some fun memories from working with microservices, and if that sort of thing interests you, you can check out my live thread on it. Now if your architect is practicing resume driven development, or you work with really large software teams, you should check out the videos from the conference. But for the rest of us, probably not worth your time, unless you want to learn about some of the frameworks that help you build microservices in dotnet. Now, this week's cool library is Bogus. Now, it's a library that allows you to generate fake data for your application. It's pretty cool. And you should check it out if you need to generate fake data. One of the common usages that I use it for is if we need to mock data as if it were coming from production. For instance, we need a million rows of data, but we can't use production data. Use Bogus, generate it that way. Job done. 
Alright, as part of today's episode, we're going to talk about the dotnet foundation. And that may seem a little boring, but I promise you it's not. It's actually really important for you, for me, and for everybody who is part of the dotnet community. The dotnet foundation was formed to advance the interests of the dotnet programming community, including enterprises, partners, individual developers and open source communities, by fostering open development and collaboration of open source technologies for dotnet programming and related technologies, and by serving as a forum for commercial and community developers to strengthen the future of the dotnet ecosystem and wider developer community by promoting openness, community participation and rapid innovation. Now if that sounded, well, canned, that's because it was: that comes directly from the dotnet foundation's bylaws, Article One, section three. Now the reason why we're talking about the dotnet foundation is that how it's governed and how we interact with it determine how successful dotnet open source is. Will dotnet open source be successful because of the foundation or in spite of the foundation? And if you've been developing in dotnet for a long time, you understand that Microsoft is really a latecomer to the open source movement. Now the foundation was formed in 2014. And it was formed much the same way that the Apache foundation or the Eclipse foundation were formed, around a technology stack, in this case dotnet, and to advance the interests of the dotnet community. Now when we say advance the interests of the dotnet community, you've got to put an asterisk there. I mean, Microsoft created the dotnet ecosystem. Microsoft's developer division has tons of tooling around dotnet; they've put millions and millions of dollars into developing dotnet into what it is, and you can't expect them just to let that go and just to be governed by a foundation. 
And of course, it's not. They're a founding member, and as such, they get certain rights in the foundation that no one else gets. For instance, in article two, section four, under founding member: Microsoft Corporation is the founding member. The founding member intends that the right to manage the affairs of the Foundation be vested exclusively in the board as described in these bylaws to the maximum extent permitted by applicable law. The founding member and eligible members will elect the board as described in Section 3.3. That's article three, section three. Now the board will consist of one director appointed by the founding member and up to six directors elected by the membership. Now that's important: no matter what, Microsoft gets one spot on the board. Okay, the membership elects the other six, in fact, not the other six, up to six. Now the other rights the founding member, Microsoft in this case, gets: the director who is appointed by them is going to serve until that person is replaced by Microsoft or otherwise vacates the position. The founding member Microsoft may replace its appointed director anytime at its sole discretion. Elected directors will serve for the term established in the director election policy. The founding member, remember, gets to change their person out whenever they want. Now that's something we need to be aware of. Now the current executive director of the dotnet foundation is Claire Novotny. Claire is the dotnet foundation executive director. And she works at Microsoft as a program manager on the dotnet team. And this is very important. If the foundation is independent, then clearly any actions taken by Microsoft would be seen as, well, it's not an independent foundation. And so right now Claire is the executive director. And as of yet, there's not been a non-Microsoft executive director that I know of. Now Microsoft has other rights. 
For instance, under article three, section nine, meetings, subsection II, limited special right for director appointed by the founding member. This is Microsoft, remember. In connection with any vote to materially change the foundation's membership policy, director election policy, project governance policy, or any intellectual property related agreements or policies, a no vote by the director appointed by the founding member will result in the disapproval of the proposed action, regardless of the number of votes for approval, and such director must be present as part of any quorum, i.e., if that director's not present, the board will not have a quorum for the matter, regardless of the number of other directors present. So this is important. Microsoft effectively controls how the dotnet foundation is set up and how it's run. You can't change policy if Microsoft doesn't agree to it. That's a very interesting way to set it up if you want it to be an independent foundation. Now under article nine, amendments: any amendment of the articles of incorporation or the bylaws must be approved by a vote of two thirds of the directors then in office. Any such amendment that materially alters, restricts, or eliminates the rights, responsibilities and privileges of the founding member must be agreed to in writing by an authorized representative of the founding member who is not serving as the director of the foundation. Now, this is interesting. You've got this special person that the founding member appoints, and they can't even vote to make changes. Someone else from the founding member has to approve these changes, like amendments. Now, why does all this matter? Like, why is this political intrigue even important? All of this is important because the dotnet Foundation was set up to help dotnet open source thrive. Now it only thrives if we do what's best for the community. If we do things that aren't best for the community, it's not going to do as well. The dotnet foundation is supposed to do that. 
It's supposed to take into account how the community feels and conduct itself in a way that helps the community thrive. For instance, they have a proposed vision statement: a diverse, healthy and active open source software community, where project maintainers are well supported and contributors feel welcome; an ecosystem where dotnet open source software is adopted in the enterprise, education and personal projects; an ecosystem where the foundation, its members, and the worldwide dotnet open source software ecosystem work together to identify challenges to the mission, and then collaborate on solutions; a community where those that benefit the most from dotnet open source software contribute back, whether it be through resources, time or money, and where it is easy for anyone who wishes to contribute to do so in whatever way they can. It should be easy for companies to contribute financially to open source software, and easy for project maintainers to receive that support. That's the vision statement they're proposing to change to right now. Now the mission statement is: the dotnet foundation is an independent nonprofit organization whose mission is to support an innovative, diverse, commercially friendly, international open source ecosystem for the .NET platform. That is their mission statement. Now with everything we've gone through so far, we've gone through their bylaws, we've gone through how they're set up, they have up to six elected directors plus someone appointed by Microsoft. But they also have one other part, which is an advisory council. This advisory council consists of six people that work at Microsoft and one that does not, as well as people that run the foundation. They have a treasurer who works at Microsoft, and Christopher House, who works at Microsoft but doesn't have his title stated. 
And they have Claire, who is the executive director of the dotnet foundation. They then have their board of directors, of which it looks like none of the directors, except for one, Beth Massi, is a member of Microsoft. So ostensibly, it's pretty independent, except for the fact that Microsoft appoints the Microsoft-appointed director; they will always be able to appoint a director; they can replace that director anytime at their discretion. And that director cannot make decisions that will materially hurt Microsoft. And Microsoft has effectively veto power over anything that changes how the dotnet foundation is run. And then they have an advisory council. It's made up largely of people from Microsoft. So even if someone wants to make a change, the advisory council is going to be there, and, you know, this doesn't look so good for Microsoft, please don't do it. But the reason all of this came up is that I believe in the dotnet Foundation. I believe in the idea of making open source software work. I think that right now, open source software won't work. It can't work. It's not financially viable for maintainers. It leads to burnout. It leads to abandoned projects, and generally creates more churn in a system, and when you create churn, especially in software, companies don't want to use that software. And I think that, you know, creating a foundation whose job it is to help keep that churn down, I think that's a good thing to do. However, open source software has to have the needs of its community at heart. And a foundation that represents open source software has to have the needs of its community at heart. Now recently, it was reported back in May that Microsoft copied its new winget Windows Package Manager architecturally from AppGet, which was a dotnet open source software package manager. They copied how it worked. They copied its ideas. 
And if that weren't bad enough, it turns out they'd called Keivan and said, hey, Keivan, can you come out and interview with us? We like what you're doing with AppGet. They interviewed him, they ghosted him, and then the night before Build they called him up to say, hey, oh, by the way, we're not going with your AppGet project, we're going to go our own way. And yet it's being announced tomorrow at Build. The next day they announced winget. Now by itself, this behavior is bad. But this is Microsoft. Aren't they big supporters of dotnet open source? Didn't they establish a foundation just for this? Well, I asked that question to the foundation, to its directors. And the response I received was: not our deal. No one asked us for help. We're staying out of it. Is that behavior keeping your community's needs in mind? I don't think so. And so I did some more digging. I was like, well, this doesn't make sense. Like, why would anyone stay silent? You've literally got a dotnet project that's popular, that is filling a hole, because Windows hasn't provided a system level package manager, and doing it pretty dang well. And why is it nobody at the dotnet foundation is speaking out about this? There were some reports from some people at the dotnet Foundation, when I really pressed them, that said, you know, hey, if they were a member, we might have stepped in. But since their project isn't on our list of projects, we don't deal with them. That's not a good enough answer. If your foundation is there to improve dotnet open source software adoption, you're not just improving it for the projects that are part of your portfolio. You need to improve it for all of them. You're the interest group for dotnet open source software, that's what you do. So again, I was a little heated. And so I started doing more research into the dotnet foundation. That's when I found all the stuff I'm telling you about. 
I have also been telling people, hey, you should become a member, you should join the foundation, and you should vote. And I believe all those things. And one of the questions I asked is, you know, what does commercial friendliness mean, back from the mission statement? And the answer I got was telling. And it's actually what led me to speak on this podcast about it today. The answer I got is: the intent is that businesses are able to use dotnet based open source software libraries without friction. Clean IP and licensing is a key part of that, which is why the foundation has projects sign a contribution agreement, and a CLA bot for future contributions that ensures that no one's going to come out of the woodwork to copyright claim the code. It also means the use of permissive licenses, which is one reason the foundation does not support libraries with copyleft licenses. It currently does not say anything about a project's commercial viability, nor favor sponsors of the foundation, of which Microsoft is just one. And that was from Claire, the Executive Director. Ben Adams, who is a board director on the foundation, said it's both: if a project is not sustainable, then it's not commercially friendly, and the dotnet foundation should help enable business to give back to projects they use in a commercially friendly way, as business purchasing can be a complicated internal system and a common barrier for all projects, which the dotnet foundation should endeavor to ease. Also, the dotnet foundation does not support non-permissive libraries for its license, excuse me, non-permissive licenses for its libraries, as they are hard to build on or use in a commercially friendly way. Now, this is important: basically, if you're producing an open source library, the dotnet foundation wants to grease the skids for businesses to use it. 
So if you produce, let's say, a library that does image compression, if you want to be a part of the dotnet Foundation, you can't use a copyleft license like GPL. If you want to be part of the foundation, for them to care about you, you've got to use a permissive license like the Apache License or MIT license. Now if you're an application, a dotnet open source application, you're allowed, although I haven't seen verbiage to that effect, you're allowed to use a non-permissive license. Now, why is all this important? Well, if you're an open source project, and you're a library, I don't see how the foundation is going to make what you do commercially viable for you. They're gonna make it commercially viable for businesses by saying no, you may not use GPL or AGPL, but you may use the MIT license and the Apache License; but for applications, they'll help you. They'll be okay with a non-permissive license, at least as I understand what they've said here. It's a hell of a way to slice it. Alright, since the bylaws don't cover everything, we have jumped into the projects policy. The projects policy lets you determine what projects can be members of the dotnet Foundation, and whether they meet the health criteria, which is important. So let's start with eligibility. Now they're eligible if they fit within the moral and ethical standards for the dotnet Foundation, that's good; if the project is aligned with the philosophy and guidelines for collaborative development, also good. And it's built on the dotnet platform, or it creates value within the dotnet ecosystem. It's eligible if it produces source code for distribution to the public at no charge. That's interesting. The license it is offered under is an open source license which has been approved by the dotnet foundation. And libraries that are mandatory dependencies of the project are also offered under a standard permissive open source license, which has been approved by the dotnet foundation. 
Now, there's more criteria, but those are the most interesting ones. If you decide you want to put your project under the dotnet Foundation, you have two choices. You can either (a) assign your project to the dotnet foundation, that's transferring the copyright of your project to the dotnet Foundation, or (b) use the contribution model, where you, or the project, retain ownership of the copyright, but you grant the dotnet foundation a broad license to the project's code and any other IP. Now, why is all this important? Why do we need to care about such esoteric documents? Because if you ever want to know what a business cares about, look at what they write down. They put a lot of effort into these governing documents. Microsoft put a lot of effort into being sure they couldn't be kicked out. They also put a lot of effort into ensuring that their rights were always protected, with effectively veto power over any decision that changes how the dotnet foundation runs. The foundation itself is set up to ensure that companies can easily use open source projects, they can easily rely on them, but you're missing a leg. And we see that with what happened with Keivan and AppGet. What about the project maintainers? Where do they come in? Sure, they get a CLA bot that makes it easier for people to contribute changes to their projects. Okay? That's a solved problem. And they get pixel space on the dotnet Foundation website, but only if they're members. Something like AppGet, something that was materially important to the dotnet community because it showed that you could use dotnet to create something as foundational as a system package manager and have it be popular, they get nothing, because they weren't a member. And even if they were a member, it's not like Microsoft said, oh, you're right, gosh, we shouldn't have competed with an open source project, our bad. They didn't do that.
Microsoft, you know, after an outcry, finally gave Keivan credit. But they used his architectural work, his design work, and that's worth $75,000 to $100,000 in consulting, just by developer time alone: if you have your developer team spend two months figuring out the architecture of a system, what its design will be, how its APIs work, that's easily worth $75,000 or $100,000. What did Keivan get? Well, he got a footnote in a readme two months later. And that's the sort of thing that I thought the dotnet Foundation was supposed to protect against. But as I found out, they're not. You know, they're there to grease the skids for companies; protecting projects is a distant second to that. Now, that may not be the desire, that may not be what they're trying to do, but it's the impact. And it sure seems like the dotnet foundation is set up in such a way that it's there to enrich Microsoft, even if it hurts the community. So let's look at their budget, what they did this year. They released their state of the foundation this week. They have five corporate sponsors, they have 800 members, and they brought in $237,000 against expenses of $157,000 for the fiscal year ending June 2020. In their budget, they had sponsorships of $558 and outreach of $81,517. The goal of outreach is to encourage new developers to build with dotnet, empower underrepresented segments of the coder community to become leaders and contributors, and assist event organizers with evangelism and growth. So for their budget, they spent $81,000 on outreach, and only $558 on sponsorships. Now, it's unclear how much of their money went to open source projects. I can't tell that just by looking at their balance sheet. There's no line saying, hey, these were outlays that we actually contributed to projects. But remember, what people write down, they care about. Where are the goals for giving money to open source projects? I don't see it.
Now, this doesn't mean that they don't care about open source, or that the dotnet foundation just exists to enrich Microsoft, but it does raise some interesting questions at this point. What we need for open source in the dotnet community is for open source to not be plagued by burnout, to not be plagued by companies stealing the work. We do have a list of problems in dotnet open source, and you know, how easy it is to get companies to adopt open source isn't even in my top five. You know, it's hard to get people to maintain projects. Authors like Keivan get their work stolen, for no money, no credit; it took the community outcry to even get a footnote in the readme file. Microsoft continually competes with the community, and maintainers don't have the backing of an interest group that can help us. That's what the dotnet Foundation should be there for: to be the backing for the maintainers, to be the special interest group for people that make open source software with dotnet. Yes, they should grease the skids for businesses to use open source software. Absolutely. But they should do it in such a way that enriches the community, not a project sponsor, not their founding member: the community. So here we are. We're at the start of a new fiscal year for the dotnet foundation. We have new directors coming on. And I want to challenge the directors that join the foundation to figure out who they are there for. Are they there to enrich the founding member, to make it easier for them? Or are they there to enrich the community? And if you are there to enrich the community, then we've got to start focusing on making dotnet open source software sustainable, and yes, that means putting money in the pockets of maintainers. Open source software is a labor of love. You have to love what you're doing. But love doesn't pay the bills. Love doesn't put a roof over your head. These companies have plenty of capital.
We need an interest group like the dotnet foundation to put that capital to work for us. Now, how can we do it? One idea is dual licensing, and the dotnet foundation should look at dual licensing. If you're an open source project, you get one license; if you're commercial, you've got to pay, and you should pay. You're making money, or you're using the software to make money in your business or to save you money; you should pay for that right if you're a business. The dotnet foundation can help by putting together an invoicing system, by saying, look, we have lawyers; you pay dues, and those dues go to lawyers who figure out dual licensing for your dotnet project. They will figure out the license so you don't have to. The next thing we'll do is set up an invoicing system to make it as easy as possible for open source projects under the dotnet foundation to generate invoices for businesses, so the business's purchasing department can pay them. The next thing we will do as the dotnet foundation is fight tooth and nail for dotnet open source. There should be no one that questions whether the dotnet foundation exists to enrich the community and seeks to defend the community from companies that would take and not give back. And that means at some point, members of the dotnet Foundation and the directors of the dotnet foundation have to stand up to factions within Microsoft and do just that. This is not the first time that a Microsoft team has taken something from open source. It's only the latest time, and it's going to happen again; that's almost a certainty. I want dotnet open source software to succeed. I believe it needs to succeed. We're not in a closed source world anymore. But for it to succeed, it's got to be financially viable for the maintainers, the people that put their hearts and their souls into creating these libraries and these frameworks that we use.
And the only way that's going to happen is if the interest group we have, the dotnet foundation, puts all of its effort towards making that the goal. Now, this incredibly depressing podcast, of course, is brought to you by myself, George Stocker. And I help teams double their productivity through test-driven development. You can reach out to me at www.doubleyourproductivity.io. Transcribed by https://otter.ai

Nice Games Club
"You can see the cabbage in it." Video Game Immersion; Main Menus and First Impressions [Nice Replay]

Nice Games Club

Play Episode Listen Later Dec 11, 2018


#42 "You can see the cabbage in it." Roundtable 2017.08.30
It's a slightly experimental show this week, as we try to see what a two-topic episode looks like. Also, Martha defends her love of coleslaw, Stephen still doesn't really like VR, and Mark has almost completely lost his voice. Discuss this week's episode on Reddit in r/gamedev here.
Video Game Immersion 0:05:54 Stephen McGregor Gaming
Our pal and VR innovator Andrew Fladeboe.
Representation and Embodiment in Virtual Reality - Jorge Albor, PopMatters
Would You Kindly Read This Article on Gaming's Greatest Plot Twist? - Mike Diver, Vice
Our recent episode of Nice Plays: OneShot - Nice Games Club, YouTube
Enhancing VR Immersion with the CPU in Star Trek: Bridge Crew - Cristiano F, Intel Developers
Cognitive Flow: The Psychology of Great Game Design - Sean Baron, Game Developer
Why Sonic the Hedgehog is awful, and always has been - Ryan Brown, Mirror
Requiem, a “roleplaying” mod for The Elder Scrolls: Skyrim
Mark is up to 118 shrines in Breath of the Wild. Stephen is not happy.
Main Menus and First Impressions 0:39:27 Mark LaCroix UI / UX
Why do console games require a button press before showing the main menu? - StackExchange
Game Design: Splash Screen - Jesse Freeman, Medium
The Ten Commandments Of Video Game Menus - Kirk Hamilton, Kotaku
We didn't get a chance to talk about it in the episode, but here is a cool brea… - Mike Fahey, Kotaku

BSD Now
225: The one true OS

BSD Now

Play Episode Listen Later Dec 20, 2017 107:06


TrueOS stable 17.12 is out, we have an OpenBSD workstation guide for you, learnings from the PDP-11, FreeBSD 2017 Releng recap and Duo SSH. This episode was brought to you by Headlines TrueOS stable release 17.12 (https://www.trueos.org/blog/trueos-17-12-release/) We are pleased to announce a new release of the 6-month STABLE version of TrueOS! This release cycle focused on lots of cleanup and stabilization of the distinguishing features of TrueOS: OpenRC, boot speed, removable-device management, SysAdm API integrations, Lumina improvements, and more. We have also been working quite a bit on the server offering of TrueOS, and are pleased to provide new text-based server images with support for Virtualization systems such as bhyve! This allows for simple server deployments which also take advantage of the TrueOS improvements to FreeBSD such as: Sane service management and status reporting with OpenRC Reliable, non-interactive system update mechanism with fail-safe boot environment support. Graphical management of remote TrueOS servers through SysAdm (also provides a reliable API for administrating systems remotely). LibreSSL for all base SSL support. Base system managed via packages (allows for additional fine-tuning). Base system is smaller due to the removal of the old GCC version in base. Any compiler and/or version may be installed and used via packages as desired. Support for newer graphics drivers and chipsets (graphics, networking, wifi, and more) TrueOS Version 17.12 (2017, December) is now available for download from the TrueOS website. Both the STABLE and UNSTABLE package repositories have also been updated in-sync with each other, so current users only need to follow the prompts about updating their system to run the new release. We are also pleased to announce the availability of TrueOS Sponsorships! 
If you would like to help contribute to the project financially we now have the ability to accept both one-time donations as well as recurring monthly donations which will help us advocate for TrueOS around the world. Thank you all for using and supporting TrueOS! Notable Changes: Over 1100 OpenRC services have been created for 3rd-party packages. This should ensure the functionality of nearly all available 3rd-party packages that install/use their own services. The OpenRC services for FreeBSD itself have been overhauled, resulting in significantly shorter boot times. Separate install images for desktops and servers (server image uses a text/console installer) Bhyve support for TrueOS Server Install FreeBSD base is synced with 12.0-CURRENT as of December 4th, 2017 (Github commit: 209d01f) FreeBSD ports tree is synced as of November 30th (pre-FLAVOR changes) Lumina Desktop has been updated/developed from 1.3.0 to 1.4.1 PCDM now supports multiple simultaneous graphical sessions Removable devices are now managed through the “automounter” service. Devices are “announced” as available to the system via *.desktop shortcuts in /media. These shortcuts also contain a variety of optional “Actions” that may be performed on the device. Devices are only mounted while they are being used (such as when browsing via the command line or a file manager). Devices are automatically unmounted as soon as they stop being accessed. Integrated support for all major filesystems (UFS, EXT, FAT, NTFS, ExFAT, etc..) NOTE: The Lumina desktop is the only one which supports this functionality at the present time. The TrueOS update system has moved to an “active” update backend. This means that the user will need to actually start the update process by clicking the “Update Now” button in SysAdm, Lumina, or PCDM (as well as the command-line option).
The staging of the update files is still performed automatically by default but this (and many other options) can be easily changed in the “Update Manager” settings as desired. Known Errata: [VirtualBox] Running FreeBSD within a VirtualBox VM is known to occasionally receive non-existent mouse clicks – particularly when using a scroll wheel or two-finger scroll. Quick Links: TrueOS Forums (https://discourse.trueos.org/) TrueOS Bugs (https://github.com/trueos/trueos-core/issues) TrueOS Handbook (https://www.trueos.org/handbook/trueos.html) TrueOS Community Chat on Telegram (https://t.me/TrueOSCommunity) *** OpenBSD Workstation Guide (https://begriffs.com/posts/2017-05-17-linux-workstation-guide.html) Design Goals User actions should complete instantaneously. While I understand if compiling code and rendering videos takes time, opening programs and moving windows should have no observable delay. The system should use minimalist tools. Corollary: cache data offline when possible. Everything from OpenStreetMaps to StackExchange can be stored locally. No reason to repeatedly hit the internet to query them. This also improves privacy because the initial download is indiscriminate and doesn't reveal personal queries or patterns of computer activity. No idling program should use a perceptible amount of CPU. Why does CalendarAgent on my Macbook sometimes use 150% CPU for fifteen minutes? Who knows. Why are background ChromeHelpers chugging along at upper-single-digit CPU? I didn't realize that holding a rendered DOM could be so challenging. Avoid interpreted languages, web-based desktop apps, and JavaScript garbage. There, I said it. Take your Electron apps with you to /dev/null! Stability. Old fashioned programs on a conservative OS on quality mainstream hardware. There are enough challenges to tackle without a bleeding edge system being one of them. Delegate to quality hardware components. 
Why use a janky ncurses software audio mixer when you can use…an actual audio mixer? Hardware privacy. No cameras or microphones that I can't physically disconnect. Also real hardware protection for cryptographic keys. Software privacy. Commercial software and operating systems have gotten so terrible about this. I even catch Mac command line tools trying to call Google Analytics. Sorry homebrew, your cute emojis don't make up for the surveillance. The Hardware Core To get the best hardware for the money I'm opting for a desktop computer. Haven't had one since the early 2000s and it feels anachronistic, but it will outperform a laptop of similar cost. After much searching, I found the HP Z240 Tower Workstation. It's no-nonsense and supports exactly the customizations I was looking for: No operating system pre-loaded (Cut out the “Windows tax”) Intel Xeon E3-1270 v6 processor (Supports ECC ram) 16 GB (2x8 GB) DDR4-2400 ECC Unbuffered memory (2400Mhz is the full memory clock speed supported by the Xeon) 256 GB HP Z Turbo Drive G2 PCIe SSD (Uses NVMe rather than SATA for faster throughput, supported by nvme(4)) No graphics card (We'll add our own) Intel® Ethernet I210-T1 PCIe (Supported by em(4)) A modest discrete video card will enable 2D Glamor acceleration on X11. The Radeon HD 6450 (sold separately) is fanless and listed as supported by radeon(4). Why build a solid computer and not protect it? Externally, the APC BR1300G UPS will protect the system from power surges and abrupt shutdowns. Peripherals The Matias Ergo Pro uses mechanical switches for that old fashioned clicky sound. It also includes dedicated buttons along the side for copying and pasting. Why is that cool? Well, it improves secondary selection, a technique that Sun computers used but time forgot. Since we're talking about a home office workstation, you may want a printer. The higher quality printers speak PostScript and PDF natively. 
Unix machines connect to them on TCP port 9100 and send PostScript commands directly. (You can print via telnet if you know the commands!) The Brother HL-L5100DN is a duplex laser printer which allows that “raw” TCP printing. Audio/Video I know a lot of people enjoy surrounding themselves with a wall of monitors like they're in the heart of NASA Mission Control, but I find multi-monitor setups slightly disorienting. It introduces an extra bit of cognitive overhead to determine which monitor is for what exactly. That's why I'd go with a modest, crisp Dell UltraSharp 24" U2417H. It's 1080p and yeah there are 4k monitors nowadays, but text and icons are small enough as it is for me! If I ever considered a second monitor it would be e-ink for comfortably reading electronic copies of books or long articles. The price is currently too high to justify the purchase, but the most promising monitor seems to be the Dasung Paperlike. In the other direction, video input, it's more flexible to use a general-purpose HDMI capture box like the Rongyuxuan than settle on a particular webcam. This allows hooking up a real camera, or any other video device. Although the motherboard for this system has built-in audio, we should use a card with better OpenBSD support. The WBTUO PCIe card uses a C-Media CMI8768 chipset, handled by cmpci(4). The card provides S/PDIF in and out ports if you ever want to use an external DAC or ADC. The way to connect it with other things is with a dedicated hardware mixer. The Behringer Xenyx 802 has all the connections needed, and the ability to route audio to and from the computer and a variety of devices at once. The mixer may seem an odd peripheral, but I want to mix the computer with an old fashioned CD player, ham radio gear, and amplifier so this unifies the audio setup. When doing remote pair programming or video team meetings it's nice to have a quality microphone. The best ones for this kind of work are directional, with a cardioid reception pattern.
The MXL 770 condenser mic is perfect, and uses a powered XLR connection supplied by the mixer. Backups We're going dead simple and old-school, back to tapes. There are a set of tape standards called LTO-n. As n increases the tape capacity gets bigger, but the tape drive gets more expensive. In my opinion the best balance these days for the home user is LTO-3. You can usually find an HP Ultrium 960 LTO-3 on eBay for 150 dollars. The cartridges hold 800GB and are about 15 dollars apiece. Hard drives keep coming down in price, but these tapes are very cheap and simpler than keeping a bunch of disk drives. Also tape has proven longevity, and good recoverability. To use old fashioned tech like this you need a SCSI host bus adapter like the Adaptec 29320LPE, supported by ahd(4). Cryptography You don't want to generate and store secret keys on a general purpose network attached computer. The attack surface is a mile wide. Generating or manipulating “offline” secret keys needs to happen on a separate computer with no network access. Little boards like the Raspberry Pi would be good except they use ARM processors (incompatible with Tails OS) and have wifi. The JaguarBoard is a small x86 machine with no wireless capability. Just switch the keyboard and monitor over to this machine for your “cleanroom.” jaguar board: Generating keys requires entropy. The Linux kernel on Tails samples system properties to generate randomness, but why not help it out with a dedicated true random number generator (TRNG)? Bit Babbler supplies pure randomness at a high bitrate through USB. (OneRNG works better on the OpenBSD main system, via uonerng(4).) bit babbler: This little computer will save its results onto a OpenPGP Smartcard V2.1. This card provides write-only access to keys, and computes cryptographic primitives internally to sign and encrypt messages. To use it with a regular computer, hook up a Cherry ST2000 card reader. 
This reader has a PIN pad built in, so no keylogger on the main computer could even obtain your decryption PIN. The Software We take the beefed up hardware above and pair it with ninja-fast software written in C. Some text-based, others raw X11 graphical apps unencumbered by ties to any specific window manager. I'd advise OpenBSD for the underlying operating system, not a Linux. OpenBSD has greater internal consistency, their man pages are impeccable, and they make it a priority to prune old code to keep the system minimal. What Have We Learned from the PDP-11? (https://dave.cheney.net/2017/12/04/what-have-we-learned-from-the-pdp-11) The paper I have chosen tonight is a retrospective on a computer design. It is one of a series of papers by Gordon Bell, and various co-authors, spanning the design, growth, and eventual replacement of the company's iconic line of PDP-11 minicomputers. This year represents the 60th anniversary of the founding of the company that produced the PDP-11. It is also 40 years since this paper was written, so I thought it would be entertaining to review Bell's retrospective through the lens of our own 20/20 hindsight. To set the scene for this paper, first we should talk a little about the company that produced the PDP-11, the Digital Equipment Corporation of Maynard, Massachusetts. Better known as DEC. It's also worth noting that the name PDP is an acronym for “Programmed Data Processor”, as at the time, computers had a reputation of being large, complicated, and expensive machines, and DEC's venture capitalists would not support them if they built a “computer”. A computer is not solely determined by its architecture; it reflects the technological, economic, and human aspects of the environment in which it was designed and built. […] The finished computer is a product of the total design environment.
“Right from the get go, Bell is letting us know that the success of any computer project is not abstractly building the best computer but building the right computer, and that takes context.” It is the nature of computer engineering to be goal-oriented, with pressure to produce deliverable products. It is therefore difficult to plan for an extensive lifetime. Because of the open nature of the PDP-11, anything which interpreted the instructions according to the processor specification, was a PDP-11, so there had been a rush within DEC, once it was clear that the PDP-11 market was heating up, to build implementations; you had different groups building fast, expensive ones and cost reduced slower ones The first weakness of minicomputers was their limited addressing capability. The biggest (and most common) mistake that can be made in a computer design is that of not providing enough address bits for memory addressing and management. A second weakness of minicomputers was their tendency not to have enough registers. This was corrected for the PDP-11 by providing eight 16-bit registers. Later, six 32-bit registers were added for floating-point arithmetic. […] More registers would increase the multiprogramming context switch time and confuse the user. “It's also interesting to note Bell's concern that additional registers would confuse the user. In the early 1970's the assumption that the machine would be programmed directly in assembly was still the prevailing mindset.” A third weakness of minicomputers was their lack of hardware stack capability. In the PDP-11, this was solved with the autoincrement/autodecrement addressing mechanism. This solution is unique to the PDP-11 and has proven to be exceptionally useful. (In fact, it has been copied by other designers.) 
“Nowadays it's hard to imagine hardware that doesn't have a notion of a stack, but consider that a stack isn't important if you don't need recursion.” “The design for the PDP-11 was laid down in 1969 and if we look at the programming languages of the time, FORTRAN and COBOL, neither supported recursive function calls. The function call sequence would often store the return address at a blank word at the start of the procedure making recursion impossible.” A fourth weakness, limited interrupt capability and slow context switching, was essentially solved with the device of UNIBUS interrupt vectors, which direct device interrupts. The basic mechanism is very fast, requiring only four memory cycles from the time an interrupt request is issued until the first instruction of the interrupt routine begins execution. A fifth weakness of prior minicomputers, inadequate character-handling capability, was met in the PDP-11 by providing direct byte addressing capability. “Strings and character handling were of increasing importance during the 1960's as scientific and business computing converged. The predominant character encodings at the time were 6 bit character sets which provided just enough space for upper case letters, the digits 0 to 9, space, and a few punctuation characters sufficient for printing financial reports.” “Because memory was so expensive, placing one 6 bit character into a 12 or 18 bit word was simply unacceptable so characters would be packed into words. This proved efficient for storage, but complex for operations like move, compare, and concatenate, which had to account for a character appearing in the top or bottom of the word, expending valuable words of program storage to cope.” “The problem was addressed in the PDP-11 by allowing the machine to operate on memory as both a 16-bit word, and the increasingly popular 8-bit byte. 
The expenditure of 2 additional bits per character was felt to be worth it for simpler string handling, and also eased the adoption of the increasingly popular 7-bit ASCII standard of which DEC were a proponent at the time. Bell concludes this point with the throw away line:” Although string instructions are not yet provided in the hardware, the common string operations (move, compare, concatenate) can be programmed with very short loops. A sixth weakness, the inability to use read-only memories, was avoided in the PDP-11. Most code written for the PDP-11 tends to be pure and reentrant without special effort by the programmer, allowing a read-only memory (ROM) to be used directly. A seventh weakness, one common to many minicomputers, was primitive I/O capabilities. A ninth weakness of minicomputers was the high cost of programming them. Many users program in assembly language, without the comfortable environment of editors, file systems, and debuggers available on bigger systems. The PDP-11 does not seem to have overcome this weakness, although it appears that more complex systems are being built successfully with the PDP-11 than with its predecessors, the PDP-8 and PDP-15. The problems faced by computer designers can usually be attributed to one of two causes: inexperience or second-systemitis Before the PDP-11, there was no UNIX. Before the PDP-11, there was no C, this is the computer that C was designed on. If you want to know why the classical C int is 16 bits wide, it's because of the PDP-11. UNIX brought us ideas such as pipes, everything is a file, and interactive computing. UNIX, which had arrived at Berkeley in 1974 aboard a tape carried by Ken Thompson, would evolve into the west coast flavoured Berkeley Software Distribution. Berkeley UNIX had been ported to the VAX by the start of the 1980's and was thriving as the counter cultural alternative to DEC's own VMS operating system.
Berkeley UNIX spawned a new generation of hackers who would go on to form companies like Sun Microsystems, and languages like Self, which led directly to the development of Java. UNIX was ported to a bewildering array of computer systems during the 80's and the fallout from the UNIX wars gave us the various BSD operating systems who continue to this day. The article, and the papers it is summarizing, contain a lot more than we could possibly dig into even if we dedicated the entire show to the topic *** News Roundup Two-factor authentication SSH with Duo in FreeBSD 11 (https://www.teachnix.com/2017/11/29/configuring-two-factor-authentication-on-freebsd-with-duo/) This setup uses an SSH key as the first factor of authentication. Please watch Part 1 on setting up SSH keys and how to scp it to your server. Video guide (https://www.youtube.com/watch?v=E5EuvF-iaV0) Register for a free account at Duo.com Install the Duo package on your FreeBSD server pkg install -y duo Log into the Duo site > Applications > Protect an Application > Search for Unix application > Protect this Application This will generate the keys we need to configure Duo. Edit the Duo config file using the course notes template vi /usr/local/etc/pam_duo.conf Example config [duo] ; Duo integration key ikey = Integration key goes here ; Duo secret key skey = Secret key goes here ; Duo API host host = API hostname goes here Change the permissions of the Duo config file. If the permissions are not correct then the service will not function properly.
chmod 600 /usr/local/etc/pam_duo.conf Edit the SSHD config file using the course notes template vi /etc/ssh/sshd_config Example config ListenAddress 0.0.0.0 Port 22 PasswordAuthentication no UsePAM yes ChallengeResponseAuthentication yes UseDNS no PermitRootLogin yes AuthenticationMethods publickey,keyboard-interactive Edit PAM to configure SSHD for Duo using the course notes template Example config ``` # auth auth sufficient pam_opie.so no_warn no_fake_prompts auth requisite pam_opieaccess.so no_warn allow_local auth required /usr/local/lib/security/pam_duo.so # session # session optional pam_ssh.so want_agent session required pam_permit.so # password # password sufficient pam_krb5.so no_warn try_first_pass password required pam_unix.so no_warn try_first_pass ``` Restart the sshd service service sshd restart SSH into your FreeBSD server and follow the link it outputs to enroll your phone with Duo. ssh server.example.com SSH into your server again ssh server.example.com Choose your preferred method and it should log you into your server. FreeBSD 2017 Release Engineering Recap (https://www.freebsdfoundation.org/blog/2017-release-engineering-recap/) This past year was undoubtedly a rather busy and successful year for the Release Engineering Team. Throughout the year, development snapshot builds for FreeBSD-CURRENT and supported FreeBSD-STABLE branches were continually provided. In addition, work to package the base system using pkg(8) continued throughout the year and remains ongoing. The FreeBSD Release Engineering Team worked on the FreeBSD 11.1-RELEASE, with the code slush starting mid-May. The FreeBSD 11.1-RELEASE cycle stayed on schedule, with the final release build starting July 21, and the final release announcement following on July 25, building upon the stability and reliability of 11.0-RELEASE. Milestones during the 11.1-RELEASE cycle can be found on the 11.1 schedule page (https://www.freebsd.org/releases/11.1R/schedule.html).
The final announcement is available here (https://www.freebsd.org/releases/11.1R/announce.html). The FreeBSD Release Engineering Team started the FreeBSD 10.4-RELEASE cycle, led by Marius Strobl. The FreeBSD 10.4-RELEASE cycle continued on schedule, with the only adjustments to the schedule being the addition of BETA4 and the removal of RC3. FreeBSD 10.4-RELEASE builds upon the stability and reliability of FreeBSD 10.3-RELEASE, and is planned to be the final release from the stable/10 branch. Milestones during the 10.4-RELEASE cycle can be found on the 10.4 schedule page (https://www.freebsd.org/releases/10.4R/schedule.html). The final announcement is available here (https://www.freebsd.org/releases/10.4R/announce.html). In addition to these releases, support for additional arm single-board computer images was added, notably Raspberry Pi 3 and Pine64. Release-related documentation effective 12.0-RELEASE and later has been moved from the base system repository to the documentation repository, making it possible to update related documentation as necessary post-release. Additionally, the FreeBSD Release Engineering article in the Project Handbook has been rewritten to outline current practices used by the Release Engineering Team. For more information on the procedures and processes the FreeBSD Release Engineering Team follows, the new article is available here and is continually updated as procedures change. Finally, following the availability of FreeBSD 11.1-RELEASE, Glen Barber attended the September Developer Summit hosted at vBSDCon in Reston, VA, USA, where he gave a brief talk comprising several points relating directly to the 11.1-RELEASE cycle. In particular, some of the points covered included what he felt went well during the release cycle, what did not go as well as it could have, and what we, as a Project, could do better to improve the release process. The slides from the talk are available in the FreeBSD Wiki.
During the question and answer time following the talk, some questions asked included: Q: Should developers use the ‘Relnotes' tag in the Subversion commit template more loosely, at the risk of an increase in false positives? A: When the tag was initially added to the template, the answer would have been “no”; however, in hindsight it is easier to sift through the false positives than to comb through months or years of commit logs. Q: What issues are present preventing moving release-related documentation to the documentation repository? A: There were some rendering issues last time it was investigated, but it is really nothing more than taking the time to fix those issues. (Note that since this talk, the documentation in question has been moved.) Q: Does it make sense to extend the timeframe between milestone builds during a release cycle from one week to two weeks, to allow more time for testing, for example, RC1 versus RC2? A: No. It would extend the length of the release cycle with no real benefit between milestones, since as we draw nearer to the end of a given release cycle, the number of changes to the code base significantly reduces. FLIMP - GIMP Exploit on FreeBSD (https://flimp.fuzzing-project.org) In 2014, when starting the Fuzzing Project (https://fuzzing-project.org/), Hanno Böck did some primitive fuzzing on GIMP and reported two bugs. They weren't fixed and were forgotten in the public bug tracker. Recently Tobias Stöckmann found one of these bugs (https://bugzilla.gnome.org/show_bug.cgi?id=739133) (CVE-2017-17785) and figured out that it's easy to exploit. What kind of bug is that? It's a classic heap buffer overflow in the FLIC parser. FLIC is a file format for animations and was introduced by Autodesk Animator. How does the exploit work? Tobias has created a detailed writeup (https://flimp.fuzzing-project.org/exploit.html). The exploit doesn't work for me!
We figured out it's unreliable and the memory addresses depend on many circumstances. The exploit ZIP comes with two variations using different memory addresses. Try both of them. We also noticed putting the files in a subdirectory sometimes made the exploit work. Anything more to tell about the GIMP? There's a wide variety of graphics formats. GIMP tries to support many of them, including many legacy formats that nobody is using any more today. While this has obvious advantages - you can access the old images you may find on a backup CD from 1995 - it comes with risks. Support for many obscure file formats means many parsers that hardly anyone ever looks at. So... what about the other parsers? The second bug (https://bugzilla.gnome.org/show_bug.cgi?id=739134) (CVE-2017-17786), which is a simple overread, was in the TGA parser. Furthermore we found buffer overreads in the XCF parser (https://bugzilla.gnome.org/show_bug.cgi?id=790783) (CVE-2017-17788), the Gimp Brush (GBR) parser (https://bugzilla.gnome.org/show_bug.cgi?id=790784) (CVE-2017-17784) and the Paint Shop Pro (PSP) parser (https://bugzilla.gnome.org/show_bug.cgi?id=790849) (CVE-2017-17789). We found another heap buffer overflow (https://bugzilla.gnome.org/show_bug.cgi?id=790849) in the Paint Shop Pro parser (CVE-2017-17787) which is probably also exploitable. In other words: the GIMP import parsers are full of memory safety bugs. What should happen? First of all, obviously, all known memory safety bugs should be fixed. Furthermore, we believe the way GIMP plugins work is not ideal for security testing. The plug-ins are separate executables; however, they can't be executed on their own, as they communicate with the main GIMP process. Ideally either these plug-ins should be changed in a way that allows running them directly from the command line or - even better - they should be turned into libraries. The latter would also have the advantage of making the parser code usable for other software projects.
Finally it might be a good idea to sandbox the import parsers. Dell FS12-NV7 Review – Bargain FreeBSD/ZFS box (http://blog.frankleonhardt.com/2017/dell-fs12-nv7-review-bargain-freebsdzfs-box/) It seems just about everyone selling refurbished data centre kit has a load of Dell FS12-NV7's to flog. Dell FS-what? You won't find them in the Dell catalogue, that's for sure. They look a bit like C2100s of some vintage, and they have a lot in common. But on closer inspection they're obviously a “special” for an important customer. Given the number of them knocking around, it's obviously a customer with big data centres stuffed full of servers with a lot of processing to do. Here's a hint: It's not Google or Amazon. So, should you be buying a weirdo box with no documentation whatsoever? I'd say yes, definitely. If your interests are anything like mine. In a 2U box you can get twin 4-core CPUs and 64GB of RAM for £150 or less. What's not to like? Ah yes, the complete lack of documentation. Over the next few weeks I intend to cover that. And to start off, this is my first PC review for nearly twenty years. As I mentioned, it's a 2U full length heavy metal box on rails. On the back there are the usual I/O ports: a 9-way RS-232, VGA, two 1Gb Ethernet, two USB2 and a PS/2 keyboard and mouse. The front is taken up by twelve 3.5″ hard drive bays, with the status lights and power button on one of the mounting ears to make room. Unlike other Dell servers, all the connections are on the back only. So, in summary, you're getting a lot for your money if it's the kind of thing you want. It's ideal as a high-performance Unix box with plenty of drive bays (preferably running BSD and ZFS). In this configuration it really shifts. Major bang-per-buck. Another idea I've had is using it for a flight simulator. That's a lot of RAM and processors for the money.
If you forego the SAS controllers in the PCIe slots and dump in a decent graphics card and sound board, it's hard to see what could be better (and you get jet engine sound effects without a speaker). So who should buy one of these? BSD geeks is the obvious answer. With a bit of tweaking they're a dream. It can build absolutely everything in 20-30 minutes. For storage you can put fast SAS drives in and it goes like the wind, even at 3Gbps bandwidth per drive. I don't know if it works with FreeNAS but I can't see why not – I'm using mostly FreeBSD 11.1 and the generic kernel is fine. And if you want to run a load of weird operating systems (like Windows XP) in VM format, it seems to work very well with the Xen hypervisor and Dom0 under FreeBSD. Or CentOS if you prefer. So I shall end this review in true PCW style: Pros: cheap; lots of CPUs; lots of RAM; lots of HD slots; great for BSD/ZFS or VMs. Cons: noisy; no AES-NI; SAS needs upgrading; limited PCI slots. As I've mentioned, the noise and SAS are easy and relatively cheap to fix, and thanks to Bitcoin miners, even the PCI slot problem can be sorted. I'll talk about this in a later post.
Beastie Bits Reflections on Hackathons (https://undeadly.org/cgi?action=article;sid=20171126090055) 7-Part Video Crash Course on SaltStack For FreeBSD (https://www.youtube.com/watch?v=HijG0hWebZk&list=PL5yV8umka8YQOr1wm719In5LITdGzQMOF) The LLVM Thread Sanitizer has been ported to NetBSD (https://blog.netbsd.org/tnf/entry/the_llvm_thread_sanitizer_has) The First Unix Port (1998) (http://bitsavers.informatik.uni-stuttgart.de/bits/Interdata/32bit/unix/univWollongong_v6/miller.pdf) arm64 platform now officially supported [and has syspatch(8)] (https://undeadly.org/cgi?action=article;sid=20171208082238) BSDCan 2018 Call for Participation (https://www.freebsdfoundation.org/news-and-events/call-for-papers/bsdcan-2018-call-for-participation/) AsiaBSDCon 2018 Call for Papers (https://www.freebsdfoundation.org/news-and-events/call-for-papers/asiabsdcon-2018-call-for-papers/) *** Feedback/Questions Shawn - DragonFlyBSD vagrant images (http://dpaste.com/3PRPJHG#wrap) Ben - undermydesk (http://dpaste.com/0AZ32ZB#wrap) Ken - Conferences (http://dpaste.com/3E8FQC6#wrap) Ben - ssh keys (http://dpaste.com/0E4538Q#wrap) SSH Chaining (https://www.bsdnow.tv/tutorials/ssh-chaining) ***

BSD Now
219: We love the ARC

Nov 8, 2017 · 130:29


Papers we love: ARC by Bryan Cantrill, SSD caching adventures with ZFS, OpenBSD full disk encryption setup, and a Perl5 Slack Syslog BSD daemon. This episode was brought to you by Headlines Papers We Love: ARC: A Self-Tuning, Low Overhead Replacement Cache (https://www.youtube.com/watch?v=F8sZRBdmqc0&feature=youtu.be) Ever wondered how the ZFS ARC (Adaptive Replacement Cache) works? How about if Bryan Cantrill presented the original paper on its design? Today is that day. Slides (https://www.slideshare.net/bcantrill/papers-we-love-arc-after-dark) It starts by looking back at a fundamental paper from the 40s where the architecture of general-purpose computers was first laid out. The main idea is the description of memory hierarchies, where you have a small amount of very fast memory, then the next level is slower but larger, and on and on. As we look at the various L1, L2, and L3 caches on a CPU, then RAM, then flash, then spinning disks, this still holds true today. The paper then does a survey of the existing caching policies and tries to explain the issues with each. This includes ‘MIN', which is the theoretically optimal policy; it requires future knowledge, but is useful for setting the upper bound: the best we could possibly do. The paper ends up showing that the ARC can end up being better than manually trying to pick the best number for the workload, because it adapts as the workload changes. At about 1:25 into the video, Bryan starts talking about the practical implementation of the ARC in ZFS, and some challenges they have run into recently at Joyent. A great discussion about some of the problems when ZFS needs to shrink the ARC. Not all of it applies 1:1 to FreeBSD because the kernel and the kmem implementation are different in a number of ways. There were some interesting questions asked at the end as well *** How do I use man pages to learn how to use commands?
(https://unix.stackexchange.com/a/193837) nwildner on Stack Exchange has a very thorough answer to the question of how to interpret man pages to understand complicated commands (xargs in this case, but not specifically). Have in mind what you want to do. When doing your research about xargs you did it for a purpose, right? You had a specific need that was reading standard output and executing commands based on that output. But, when I don't know which command I want? Use man -k or apropos (they are equivalent). If I don't know how to find a file: man -k file | grep search. Read the descriptions and find one that will better fit your needs. Apropos works with regular expressions by default (man apropos, read the description and find out what -r does), and in this example I'm looking for every manpage where the description starts with "report". Always read the DESCRIPTION before starting Take the time to read the description. By just reading the description of the xargs command we will learn that: xargs reads from STDIN and executes the command needed. This also means that you will need to have some knowledge of how standard input works, and how to manipulate it through pipes to chain commands The default behavior is to act like /bin/echo. This gives you a little tip that if you need to chain more than one xargs, you don't need to use echo to print. We have also learned that unix filenames can contain blanks and newlines, that this could be a problem, and that the argument -0 is a way to prevent things from exploding by using null character separators. The description warns you that the command being used as input needs to support this feature too, and that GNU find supports it. Great. We use a lot of find with xargs. xargs will stop if exit status 255 is reached. Some descriptions are very short and that is generally because the software works in a very simple way. Don't even think of skipping this part of the manpage ;) Other things to pay attention to...
You know that you can search for files using find. There is a ton of options and if you only look at the SYNOPSIS, you will get overwhelmed by those. It's just the tip of the iceberg. Excluding NAME, SYNOPSIS, and DESCRIPTION, you will have the following sections: When this method will not work so well... + Tips that apply to all commands Some options, mnemonics and "syntax style" travel through all commands, saving you some time by not having to open the manpage at all. Those are learned by practice and the most common are: Generally, -v means verbose. -vvv is a variation "very very verbose" on some software. Following the POSIX standard, generally one-dash arguments can be stacked. Example: tar -xzvf, cp -Rv. Generally -R and/or -r means recursive. Almost all commands have a brief help with the --help option. --version shows the version of a software. -p, on copy or move utilities, means "preserve permissions". -y means YES, or "proceed without confirmation" in most cases. Default values of commands. At the pager chunk of this answer, we saw that less -is is the pager of man. The default behavior of commands is not always shown in a separate section on manpages, or in the section that is placed at the top. You will have to read the options to find out defaults, or if you are lucky, typing /pager will lead you to that info. This also requires you to know the concept of the pager (software that scrolls the manpage), and this is a thing you will only acquire after reading lots of manpages. And what about the SYNOPSIS syntax? After getting all the information needed to execute the command, you can combine options, option-arguments and operands inline to get your job done. Overview of concepts: Options are the switches that dictate a command's behavior. "Do this" "don't do this" or "act this way". Often called switches. Check out the full answer and see if it helps you better grasp the meaning of a man page and thus the command.
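As a quick recap, the search workflow described above looks like this at the command line (the keyword and pattern here are just examples, and apropos output varies by system):

```
$ man -k file | grep search     # find manpages related to searching for files
$ apropos -r '^report'          # regex search: descriptions starting with "report"
$ man xargs                     # then read NAME and DESCRIPTION before anything else
```

The -r flag is the regex switch the answer tells you to look up in man apropos; on systems where apropos already treats patterns as regexes, it is redundant.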
*** My adventure into SSD caching with ZFS (Home NAS) (https://robertputt.co.uk/my-adventure-into-ssd-caching-with-zfs-home-nas.html) Robert Putt has written about his adventure using SSDs for caching with ZFS on his home NAS. Recently I decided to throw away my old defunct 2009 MacBook Pro which was rotting in my cupboard and I decided to retrieve the only useful part before doing so, the 80GB Intel SSD I had installed a few years earlier. Initially I thought about simply adding it to my desktop as a bit of extra space but in 2017 80GB really wasn't worth it and then I had a brainwave… Let's see if we can squeeze some additional performance out of my HP Microserver Gen8 NAS running ZFS by installing it as a cache disk. I installed the SSD to the cdrom tray of the Microserver using a floppy disk power to SATA power converter and a SATA cable; unfortunately it seems the CD ROM SATA port on the motherboard is only a 3Gbps port, although this didn't matter so much as it was an older 3Gbps SSD anyway. Next I booted up the machine and to my surprise the disk was not found in my FreeBSD install; then I realised that the SATA port for the CD drive is actually provided by the RAID controller, so I rebooted into Intelligent Provisioning and added an additional RAID0 array with just the 1 disk to act as my cache. In fact all of the disks in this machine are individual RAID0 arrays so it looks like just a bunch of disks (JBOD), as ZFS offers additional functionality over normal RAID (mainly scrubbing, deduplication and compression). Configuration Let's have a look at the zpool before adding the cache drive to make sure there are no errors or ugliness: Now let's prep the drive for use in the zpool using gpart. I want to split the SSD into two separate partitions, one for L2ARC (read caching) and one for ZIL (write caching). I have decided to split the disk into 20GB for ZIL and 50GB for L2ARC.
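The partitioning and pool changes described can be sketched with gpart and zpool like this — the SSD's device name (da5) and the pool name (tank) are assumptions, not taken from the article:

```
$ gpart create -s gpt da5                       # fresh GPT scheme on the SSD (device name assumed)
$ gpart add -t freebsd-zfs -s 20G -l zil da5    # 20GB partition, labeled for the ZIL
$ gpart add -t freebsd-zfs -s 50G -l l2arc da5  # 50GB partition, labeled for the L2ARC
$ zpool add tank log gpt/zil                    # attach as the pool's log (ZIL) device
$ zpool add tank cache gpt/l2arc                # attach as the pool's cache (L2ARC) device
```

Afterwards, zpool status should show the new log and cache vdevs listed under the pool.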
Be warned: using 1 SSD like this is considered unsafe because it is a single point of failure in terms of delayed writes (a redundant configuration with 2 SSDs would be more appropriate) and the heavy write cycles on the SSD from the ZIL are likely to kill it over time. Now it's time to see if adding the cache has made much of a difference. I suspect not as my Home NAS sucks; it is a HP Microserver Gen8 with the crappy Celeron CPU and only 4GB RAM. Anyway, let's test it and find out. First off let's throw fio at the mount point for this zpool and see what happens both with the ZIL and L2ARC enabled and disabled. Observations Ok, so the initial result is a little disappointing, but hardly unexpected; my NAS sucks and there are lots of bottlenecks: CPU, memory and the fact only 2 of the SATA ports are 6Gbps. There is no real difference performance wise in comparison between the results; the IOPS, bandwidth and latency appear very similar. However let's bear in mind fio is a pretty hardcore disk benchmark utility; how about some real world use cases? Next I decided to test a few typical file transactions that this NAS is used for: Samba shares to my workstation. For the first test I wanted to test reading a 3GB file over the network with both the cache enabled and disabled. I would run this multiple times to ensure the data is hot in the L2ARC and to ensure the test is somewhat repeatable; the network itself is an uncongested 1Gbit link and I am copying onto the secondary SSD in my workstation. The dataset for these tests has compression and deduplication disabled. Samba Read Test Not bad; once the data becomes hot in the L2ARC, cached reads appear to gain a decent advantage compared to reading from the disk directly. How does it perform when writing the same file back across the network using the ZIL vs no ZIL?
Samba Write Test Another good result in the real world test; this certainly helps the write transfer speed. However, I do wonder what would happen if you filled the ZIL transferring a very large file, though this is unlikely with my use case as I typically only deal with a couple of files of several hundred megabytes at any given time, so a 20GB ZIL should suit me reasonably well. Is ZIL and L2ARC worth it? I would imagine with a big beefy ZFS server running in a company somewhere with a large disk pool and lots of users, multiple enterprise level SSDs for ZIL and L2ARC would be well worth the investment; however, at home I am not so sure. Yes I did see an increase in read speeds with cached data and a general increase in write speeds, however it is use case dependent. In my use case I rarely access the same file frequently; my NAS primarily serves as a backup and for archived data, and although the write speeds are cool I am not sure it's a deal breaker. If I built a new home NAS today I'd probably concentrate the budget on a better CPU, more RAM (for ARC cache) and more disks. However if I had a use case where I frequently accessed the same files and needed to do so in a faster fashion then yes, I'd probably invest in an SSD for caching. I think if you have a spare SSD lying around and you want something fun to do with it, sure, chuck it in your ZFS based NAS as a cache mechanism. If you were planning on buying an SSD for caching then I'd really consider your needs and decide if the money can be spent on alternative stuff which would improve your experience with your NAS. I know my NAS would benefit more from an extra stick of RAM and a more powerful CPU, but as a quick evening project with some parts I had hanging around, adding some SSD cache was worth a go.
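If you repeat this experiment, one way to check whether the cache devices are actually being hit on FreeBSD is (pool name assumed):

```
$ zpool iostat -v tank 5   # per-vdev I/O every 5s, including the log and cache devices
$ sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
$ sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses
```

A growing l2_hits counter between repeated reads of the same file is a good sign the L2ARC is doing its job.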
More Viewer Interview Questions for Allan News Roundup Setup OpenBSD 6.2 with Full Disk Encryption (https://blog.cagedmonster.net/setup-openbsd-with-full-disk-encryption/) Here is a quick way to set up (in 7 steps) OpenBSD 6.2 with encryption of the filesystem. First step: Boot and start the installation: (I)nstall: I Keyboard Layout: ENTER (I'm French, so in my case I took the FR layout) Leave the installer with: ! Second step: Prepare your disk for encryption. Using an SSD, my disk is named: sd0; the name may vary, for example: wd0. Initiating the disk: Configure your volume: Now we'll use bioctl to encrypt the partition we created, in this case: sd0a (disk sd0 + partition « a »). Enter your passphrase. Third step: Let's resume the OpenBSD installer. We follow the install procedure Fourth step: Partitioning of the encrypted volume. We select our new volume, in this case: sd1 The whole disk will be used: W(hole) Let's create our partitions: NB: You are more than welcome to create multiple partitions for your system. Fifth step: System installation It's time to choose how we'll install our system (network install by http in my case) Sixth step: Finalize the installation. Last step: Reboot and start your system. Enter your passphrase. Welcome to OpenBSD 6.2 with a fully encrypted file system. Optional: Disable the swap encryption. The swap is actually part of the encrypted filesystem, so we don't need OpenBSD to encrypt it. Sysctl gives us this possibility. Step-by-Step FreeBSD installation with ZFS and Full Disk Encryption (https://blog.cagedmonster.net/step-by-step-freebsd-installation-with-full-disk-encryption/) 1. What do I need? For this tutorial, the installation has been made on an Intel Core i7 - AMD64 architecture.
On a USB key, you would probably use this link: ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/amd64/ISO-IMAGES/11.1/FreeBSD-11.1-RELEASE-amd64-mini-memstick.img If you can't do a network installation, you'd better use this image: ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/amd64/ISO-IMAGES/11.1/FreeBSD-11.1-RELEASE-amd64-memstick.img You can write the image file on your USB device (replace XXXX with the name of your device) using dd: # dd if=FreeBSD-11.1-RELEASE-amd64-mini-memstick.img of=/dev/XXXX bs=1m 2. Boot and install: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F1.png) 3. Configure your keyboard layout: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F2.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F3.png) 4. Hostname and system components configuration: Set the name of your machine: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F4.png) What components do you want to install? Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F5.png) 5. Network configuration: Select the network interface you want to configure. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F6.png) First, we configure our IPv4 network. I used a static address so you can see how it works, but you can use DHCP for an automated configuration; it depends on what you want to do with your system (desktop/server) Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F7.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F7-1.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F8.png) IPv6 network configuration. Same as for IPv4, you can use SLAAC for an automated configuration.
Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F9.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F10-1.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F10-2.png) Here, you can configure your DNS servers; I used the Google DNS servers, so you can use them too if needed. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F11.png) 6. Select the server you want to use for the installation: I always use the IPv6 mirror to ensure that my IPv6 network configuration is good. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F12.png) 7. Disk configuration: As we want to do an easy full disk encryption, we'll use ZFS. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F13.png) Make sure to select the disk encryption: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F14.png) Launch the disk configuration: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F15.png) Here everything is normal; you have to select the disk you'll use: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F16.png) I have only one SSD disk named da0: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F17.png) Last chance before erasing your disk: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F18.png) Time to choose the password you'll use to start your system: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F19.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F20.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F21.png) 8. Last steps to finish the installation: The installer will download what you need and what you selected previously (ports, src, etc.) to create your system: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F22.png) 8.1.
Root password: Enter your root password: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F22-1.png) 8.2. Time and date: Set your timezone, in my case: Europe/France Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F22-2.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F23.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F23-1.png) Make sure the date and time are good, or you can change them: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F24.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F25.png) 8.3. Services: Select the services you'll use at system startup, depending again on what you want to do. In many cases powerd and ntpd will be useful, sshd if you're planning on using FreeBSD as a server. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26.png) 8.4. Security: Security options you want to enable. You'll still be able to change them after the installation with sysctl. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-1.png) 8.5. Additional user: Create an unprivileged system user: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-2.png) Make sure your user is in the wheel group so they can use the su command. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-3.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-4.png) 8.6. The end: End of your configuration; you can still do some modifications if you want: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-5.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-6.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-7.png) 9.
First boot: Enter the passphrase you have chosen previously: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F27.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F28.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F29.png) Welcome to FreeBSD 11.1 with full disk encryption! *** The anatomy of ldd program on OpenBSD (http://nanxiao.me/en/the-anatomy-of-ldd-program-on-openbsd/) In the past week, I read the ldd (https://github.com/openbsd/src/blob/master/libexec/ld.so/ldd/ldd.c) source code on OpenBSD to get a better understanding of how it works. And this post should also be a reference for other *NIX OSs. The ELF (https://en.wikipedia.org/wiki/Executable_and_Linkable_Format) file is divided into 4 categories: relocatable, executable, shared, and core. Only the executable and shared object files may have dynamic object dependencies, so ldd only checks these 2 kinds of ELF file: (1) Executable. In fact, ldd leverages the LD_TRACE_LOADED_OBJECTS environment variable, and the code is as follows: if (setenv("LD_TRACE_LOADED_OBJECTS", "true", 1) < 0) err(1, "setenv(LD_TRACE_LOADED_OBJECTS)"); When LD_TRACE_LOADED_OBJECTS is set to 1 or true, running an executable file will show the shared objects needed instead of running it, so you don't even need ldd to check an executable file. See the following outputs: $ /usr/bin/ldd usage: ldd program ... $ LD_TRACE_LOADED_OBJECTS=1 /usr/bin/ldd Start End Type Open Ref GrpRef Name 00000b6ac6e00000 00000b6ac7003000 exe 1 0 0 /usr/bin/ldd 00000b6dbc96c000 00000b6dbcc38000 rlib 0 1 0 /usr/lib/libc.so.89.3 00000b6d6ad00000 00000b6d6ad00000 rtld 0 1 0 /usr/libexec/ld.so (2) Shared object.
The code to print dependencies of a shared object is as follows: if (ehdr.e_type == ET_DYN && !interp) { if (realpath(name, buf) == NULL) { printf("realpath(%s): %s", name, strerror(errno)); fflush(stdout); _exit(1); } dlhandle = dlopen(buf, RTLD_TRACE); if (dlhandle == NULL) { printf("%s\n", dlerror()); fflush(stdout); _exit(1); } _exit(0); } Why is the condition for checking whether an ELF file is a shared object written like this: if (ehdr.e_type == ET_DYN && !interp) { ...... } That's because the file type of a position-independent executable (PIE) is the same as a shared object, but normally a PIE contains an interpreter program header, since it needs the dynamic linker to load it, while a shared object lacks one (refer to this article). So the above condition will filter out PIE files. The dlopen(buf, RTLD_TRACE) is used to print dynamic object information. And the actual code is like this: if (_dl_traceld) { _dl_show_objects(); _dl_unload_shlib(object); _dl_exit(0); } In fact, you can also implement a simple application which outputs dynamic object information for shared objects yourself: #include <dlfcn.h> int main(int argc, char **argv) { dlopen(argv[1], RTLD_TRACE); return 0; } Compile and use it to analyze /usr/lib/libssl.so.43.2: $ cc lddshared.c $ ./a.out /usr/lib/libssl.so.43.2 Start End Type Open Ref GrpRef Name 000010e2df1c5000 000010e2df41a000 dlib 1 0 0 /usr/lib/libssl.so.43.2 000010e311e3f000 000010e312209000 rlib 0 1 0 /usr/lib/libcrypto.so.41.1 The same as using ldd directly: $ ldd /usr/lib/libssl.so.43.2 /usr/lib/libssl.so.43.2: Start End Type Open Ref GrpRef Name 00001d9ffef08000 00001d9fff15d000 dlib 1 0 0 /usr/lib/libssl.so.43.2 00001d9ff1431000 00001d9ff17fb000 rlib 0 1 0 /usr/lib/libcrypto.so.41.1 Through studying the ldd source code, I also got many by-products: such as knowledge of ELF files, linking and loading, etc. So diving into code is a really good method to learn *NIX deeper!
Perl5 Slack Syslog BSD daemon (https://clinetworking.wordpress.com/2017/10/13/perl5-slack-syslog-bsd-daemon/)
So I have been working on my little Perl daemon for a week now. It is a simple syslog daemon that listens on port 514 for incoming messages. It listens on a port so it can process log messages from my consumer Linux router as well as the messages from my server. Messages above the alert level are sent, as are messages that match a regex for SSH or DHCP (I want to keep track of new connections to my wifi). The rest of the messages are not sent to Slack but appended to a log file. This is very handy, as I can get access to info like failed SSH logins, disk failures, and new devices connecting to the network, all on my Android phone when I am not home. Screenshot (https://clinetworking.files.wordpress.com/2017/10/screenshot_2017-10-13-23-00-26.png) The situation arose today that the internet went down, and I thought to myself: what would happen to all my important syslog messages when they couldn't be sent? Before, the script only ran an eval block on the botsend() function. The error was returned and handled, but nothing was done and the unsent message was discarded. So I added a function that appends unsent messages to an array; they are later sent when the server is not busy sending messages to Slack. Slack has a limit of one message per second. The new addition works well and means that if the internet fails, my server will store these messages in memory and resend them at a rate of one message per second when internet connectivity returns. It currently sends the newest ones first, but I am not sure if this is a bug or a feature at this point! It currently works with my Linux-based WiFi router and my FreeBSD server. It is easy to scale, as all you need to do is send messages to syslog to get them sent to Slack. You could send CPU temp, logged-in users, etc.
There is a GitHub page: https://github.com/wilyarti/slackbot
***
Lscpu for OpenBSD/FreeBSD (http://nanxiao.me/en/lscpu-for-openbsdfreebsd/) Github Link (https://github.com/NanXiao/lscpu)
There is a neat command, lscpu, which is very handy for displaying CPU information on a GNU/Linux OS:

$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             2

But unfortunately, the BSD OSs lack this command; maybe one reason is that lscpu relies heavily on the /proc file system, which the BSDs don't provide. :-) Take OpenBSD as an example: if I want to know CPU information, dmesg is one choice:

$ dmesg | grep -i cpu
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Core(TM)2 Duo CPU P8700 @ 2.53GHz, 2527.35 MHz
cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,XSAVE,NXE,LONG,LAHF,PERF,SENSOR
cpu0: 3MB 64b/line 8-way L2 cache
cpu0: apic clock running at 266MHz
cpu0: mwait min=64, max=64, C-substates=0.2.2.2.2.1.3, IBE

But the output looks messy to me, not very clear. As for dmidecode, it used to be another option, but now it doesn't work out of the box because it accesses /dev/mem, which for security reasons OpenBSD doesn't allow by default (you can refer to this discussion):

$ ./dmidecode
# dmidecode 3.1
Scanning /dev/mem for entry point.
/dev/mem: Operation not permitted

Given the above situation, I wanted a dedicated command for showing CPU information on my BSD box. So over the past 2 weeks, I developed an lscpu program for OpenBSD/FreeBSD, or more accurately, OpenBSD/FreeBSD on the x86 architecture, since I only have some Intel processors at hand. The application gets CPU metrics from 2 sources:
(1) sysctl functions.
The BSD OSs provide the sysctl interface, which I can use to get general CPU particulars, such as how many CPUs the system contains, the byte order of the CPU, etc.
(2) The CPUID instruction. On the x86 architecture, the CPUID instruction can obtain very detailed information about the CPU. This coding work is a little tedious and error-prone, not only because I need to reference both the Intel and AMD specifications, since these 2 vendors have minor distinctions, but also because I need to parse the bits of register values. The code is here (https://github.com/NanXiao/lscpu), and if you run OpenBSD/FreeBSD on x86 processors, please try it. It would be even better if you could give some feedback or report issues; I'd appreciate it very much. In the future, if I get access to other CPUs, such as ARM or SPARC64, maybe I will enrich this small program.
***
Beastie Bits
OpenBSD Porting Workshop - Brian Callahan will be running an OpenBSD porting workshop in NYC for NYC*BUG on December 6, 2017. (http://daemonforums.org/showthread.php?t=10429)
Learn to tame OpenBSD quickly (http://www.openbsdjumpstart.org/#/)
Detect the operating system using UDP stack corner cases (https://gist.github.com/sortie/94b302dd383df19237d1a04969f1a42b)
***
Feedback/Questions
Awesome Mike - ZFS Questions (http://dpaste.com/1H22BND#wrap)
Michael - Expanding a file server with only one hard drive with ZFS (http://dpaste.com/1JRJ6T9) - information based on Allan's IRC response (http://dpaste.com/36M7M3E)
Brian - Optimizing ZFS for a single disk (http://dpaste.com/3X0GXJR#wrap)
***

The Social Media Clarity Podcast
Ban the Banhammer

The Social Media Clarity Podcast

Play Episode Listen Later Sep 26, 2017 23:35


Ban the Banhammer - Episode 28 Scott and Randy discuss the (mis)use of the various forms of the "ban" tool, and provide alternative techniques.
Show Links
It's Almost Impossible to Rehabilitate an Online Troll, Steve Brock, Director of Moderation Services at Mzinga
Building Web Reputation Systems
SMC Episode 14: LinkedIn's Scarlet Letter
#CMAD presents: Modern Moderation: Moving Beyond Trolls and Ban Hammers (stream) Streamed live on Jan 26, 2016 - Join us to talk tech with Justin Isaf, to ramble about reputations with F. Randall Farmer, to ponder proactive tasks with Sarah Hawk, to advocate for automation with Darren Gough, and to learn the legal aspects with Aurelia Butler-Ball.
Transcript
Randy: Ban the ban hammer. Scott: What? Randy: Ban the ban hammer. Scott: Wait a minute. We gotta talk about this. Randy: Welcome to the Social Media Clarity Podcast. 15 minutes of concentrated analysis and advice about social media and platform and product design. Scott: So in this episode we're going to focus on what seems to be the moderator's tool of choice, the ban. Randy: And how it is, most often in our experience, the wrong tool. Scott: Yeah, it could be the right tool in the right circumstance, but it's mostly misapplied. Randy: Yeah, if you're reaching for it first, it's probably the wrong tool, but we'll talk about that in detail. We could set this up by talking a little bit about our experiences encountering other people talking about the ban hammer. Scott, you found a wonderful reference post. Do you want to tell us a little bit about it? Scott: Sure. So, this is just an example: a post by Steve Brock, who's the director of moderation services at Mzinga, about the difficulties of rehabilitating online trolls. In it he's talking a lot about bans: how to identify trolls, how to ban, what the different bans are, how to apply them, and whether or not they are effective.
So this is a good example of how a lot of people tend to think about dealing with misbehavior in online communities. Randy: Although we're going to be picking a little bit on Mr. Brock, by no means is he unique in most of these positions. In fact, we're going to talk a little bit about how each point has its own challenges, how each point carries forward the error of the previous one, and how it leads you to a place that is both undesirable and expensive. Scott: So what are the steps, Randy? Randy: The steps are: first you identify the troll, figure out who it is you want to take action on, someone who is doing harm to your site. You perma-ban them. We'll explain the different bans in a few minutes. The idea is to kick them off the site and make their identity no longer accessible. He also suggests removing all their content, even though that doesn't mean it's all bad. And if they return with a new account, immediately ban that as soon as you detect it. You could try hell-banning. This is number five; he says, "But they'll find out." He's actually right about that, and then he says, "Well, the abuse will get worse once they figure out you're pulling tricks on them." It turns out, number six, you have to assume that they can't be reformed, so you've got to stay vigilant. You have to stay on it all the time. And for his final step he says, "Therefore, you need 24/7 coverage and so you need to hire enough moderators to cover your site." It's our contention that this entire list, which leads to outsourced 24/7 coverage of your site and a constant battle with removing people, follows from errors starting at the very beginning of the list. Scott: But before we get into that, let's actually define a few of the terms that have come up already. Perma-ban: it's a ban that is based on the identity, banning the account. There's no fixed time out. That's it. You ban them, they're gone. You can ban based on their account. You can hit their IP address.
You could try to ban them based on their credit card so they can't start a new account, if it's a paid account. Also, there's kind of this nuclear option of removing all of the content. Regardless of whether some of the content was actually good, you ban the person, therefore their content must also go too. So, that's the perma-ban. Randy: I'll talk a little bit about hell-banning, which I mentioned earlier. This is also known as shadow-banning, stealth-banning, or ghost-banning. It's strange. It's hiding content from the community except for the creator of the content, the person you're hell-banning. The idea is, you'll see that you posted, but no one else will see the post. It's meant to be discouraging, or meant to just let you burn out your energy. It goes all the way back to The Well. The Well had a method for doing this where people would actually selectively stop reading content from other people, and this led to destroyed threads where no one knew what was going on, because you could never tell who was actually reading what. I want to quickly, even though I know we're just making a list, go ahead and shoot hell-banning in the head- Scott: Yup. Randy: So we don't have to talk about it much more. Because this involves a bunch of complicated technology, which is trivially defeated by anyone who is malicious and confusing to those who aren't. Scott: Yup. Randy: The community doesn't know what they're missing, and when someone knows someone and talks about it and they find out they're hell-banned, you end up with the community talking about hell-banning, not whatever your content is. Scott: Right. Randy: Scott and I are both unified. There's a lot of moderators like us. Do not waste any time on technology to hide from your user that their behavior is unacceptable. Scott: It wastes a lot of time because it leads inevitably to two results. Steve Brock calls it out perfectly.
They'll find out, and then they just abuse even more and even harder because they feel like they've been cheated in some way. And then the other one is anyone else who isn't hell-banned or ghost-banned gets paranoid about whether or not they've been ghost-banned. If any technical glitch occurs, then they suddenly think that some action has been taken against them. This is not healthy for your community. You'll spend more time assuaging people of their paranoia than you will building the actual community and trust. It destroys trust. So those are two types of bans. There's another type of ban, if you will, and it's the time out. It's a temporary suspension from being able to contribute in some particular way. This breaks up into a couple of ways. You can limit somebody's permissions so they can't post, or they can't reply. You can even limit their ability to log into the system, but the idea is that it's a time out, so that you can communicate with the person. Or you can degrade their service. Randy, you have some good stories about degrading service. Randy: There are often reputation systems for detecting egregious behaviors, and what I'm talking about specifically is spamming behavior. I worked for Yahoo for five years. When they would detect either a mail spamming bot, or a bot hitting the search engine to get results to use for SEO, what they didn't do is ban IP addresses. What they did instead was build a reputation database and degrade service. What that meant was, when a request would come in from a highly suspected spamming robot, they would serve it, they would just serve it very slowly. This is kind of a low-level taxing. What happens if you ban them all? We saw it on one of the few days Yahoo was actually down: they made a change to their interface for search, and all the spamming robots in the world that were hitting Yahoo started failing instantly. They were getting instant errors back from the web servers.
This was creating a denial of service attack, as a result of all the robots, which were never used to failing, now retrying instantaneously. So hundreds of thousands of robots were now sending hundreds of requests per minute. Scott: Yeah, that's bad. Randy: They put back the interface, because there was this kind of détente in degraded-service design. Scott: So that's not the same thing as ghost-banning, that's just degrading somebody's service, because it's targeted. Spammers want to be able to spread their spam as quickly as possible and move on to the next target, and if you slow them down, you're actually costing them money. Randy: Spamming behavior is different from whatever trolling behavior is. The reason we say ban the banhammer is because cases like we've outlined here are missing the key point. The category error is the difference between troll and trolling. The difference between being a spammer, a person, and spamming. We really have a problem with online social contributions. It isn't people, it's behaviors. The only thing you can really evaluate is the content. It's trolling that's the problem, not trolls. Scott: Right. It's really important, and we've talked about this in the past, and I talk about this when I give workshops: you focus on the behavior, not on the person. In sociology, there's a thing called the fundamental attribution error, and that's basically when you take a behavior and you ascribe it as a personality trait to a person. So if somebody does something that is a violation of your terms of service, they post something that is borderline racist, they are not necessarily a racist. They are not necessarily a troll. They've done something, and that's a specific behavior that can be addressed, as opposed to simply assuming this is who they are and they'll never ever be different.
You do wind up in exactly that idea of trolls can never be reformed if you make certain assumptions that their behavior is tied intrinsically to their personality. We just know that's not true. Randy: We even know that IDs aren't people. Back to the post: the person comes back over and over with multiple IDs. So an ID-banning solution is no solution at all. But sometimes, it's the reverse. Sometimes there's no person. When we start talking about spamming, the spammer, the mythical person who is doing the spamming, is not reachable. He's got a hundred thousand robots doing stuff. You don't even know where he is. You don't know how to reach him. You can't reach back through those robots. It's the robots that are exhibiting the behavior. So you have to deal with the robots in that case. In the case of trolling, you have to deal with the trolling posts. What are the things that are causing a problem? What's against your terms of service or your community guidelines? Scott: We're saying ban the banhammer. When you're reaching for that as your first tool, it's probably the wrong thing to reach for, but there are times when we do need to use this kind of a tool in specific instances, and spamming is one of those instances. Scott: Let's define it a little bit better, because a lot of people will call all kinds of things spamming, including just an off-color comment. Scott: They have zero or even negative quality to your community. They have absolutely no contribution at all. They're not even part of the discussion. There's a lot of them, or they're coming really fast. There's not a human behind those particular posts. At that particular point, what we're doing is throttling the input. Instead of treating it as a community problem, we're treating it as a bandwidth problem. Randy: And bans are not my first tool for dealing with that. My first tool for dealing with that is content hiding, described at the end of my book, "Building Web Reputation Systems."
In the final chapter we talk about how we enabled users on Yahoo Answers to mark items, such as spam, and we started to trust them. We came up with a method by which we could trust them and we could literally, within 30 seconds of when a piece of spam would come up, it would be hidden from the network. What hidden means is kind of the opposite of hell-banning. That item disappears for everybody, and a notice is sent back to the author, and this deals with that problem, which is if the author is just a robot, the author can't mitigate it. You won't be sending back a note saying, "No, no, no, this is my real content. I got ganged up on," or something. This is why, when we turned on this mechanism on Yahoo Answers, spamming vanished, literally within two weeks. The spammers picked up and they left and they went somewhere else. Scott: And that's because you were using the crowd to surgically remove the bad content. Randy: Yes. So the point there is there was no banning of the user account. It wasn't necessary. The user account became inactive because it no longer could successfully post. Scott: You didn't have to ban anybody. They abandoned their effort. Randy: That same process is used on accounts that are more tightly tied to people, the people who care about their postings. If they have them reported using the same mechanism, not for spamming, but for tastelessness, or some breaking of the rules. The same mechanism will trigger. The content could be hidden and they would receive a note explaining to them what the community gave them as feedback about what needs to change, and they could change it. They weren't banned. The problem with the ban is when it does tie to a person, it's an ending. It's an invitation either to an escalation or an ending. It's the last thing you should ever do if you do it. If the first thing you do is ban someone, they can't correct the behavior, and you come off very poorly. Scott: It's a slap in the face. Randy: The customer lost forever. 
Scott: At Schwab Learning, I had the ability to ban people, but we never did. We dealt with spam by an escalation process. It would evaluate what came in, and I would either pull the content, or I would hold the content, but I would always contact the person. I had different levels of contact. I had the, "Oh, you made a mistake. What you wrote looks like spam. I'd love to hear more about you." And so that's not an ending. I was opening up a bigger beginning. "Tell me more. Please participate more, and prove to me that this isn't just spam, but it looks like spam, so I'm worried about it." Then, there was the self-promoters. There was this gray area about solicitation in that particular community. So we'd have some people who were very well meaning, and would make their own products, and would want to promote them to other parents, and I would say, "Hey, you know, I'm really sorry but self promotion is not okay, but if you want to talk about other things, and you want to promote this on your profile, when you talk about other things on our community, people will see your profile, and then they'll see what you're trying to advertise. We're giving you that space to be able to do that. Then there was, "That's it. You're a spammer. I've pulled your content. You violated my terms of service. Please don't come back. I've canceled things." That always gave a chance for somebody to give me a response back. We didn't have a huge amount of spam, but invariably if it was that bad, nobody responded. But I would usually get some kind of response from the other messages from anything from, "Oops, I'm sorry," to "How dare you." We took it from there. But that was a discussion. Randy: Yeah. So what you want is beginnings, right? You want dialogues, as much as you can afford them. If you're going to pay people to moderate, they should be having conversations, not just destroying them. Scott: Ideally. Unfortunately, a lot of moderation services aren't really set up that way. 
They're set up to remove content based on terms of service. It's difficult to find moderation services where you can spend the money, and take the time, to actually help foster communities. It's a shame. Randy: It's mostly a scaling problem. If you've got user-generated content in great quantity, as I mentioned earlier, if you're going to invest in tools, don't invest in hell-banning. Invest instead in customer feedback so that the users can tell each other how to behave and reinforce that behavior. That's one way to increase the leverage you get out of your paid moderation people, so they can spend time with specific cases that need their attention. If the community is keeping the new kid who shows up and doesn't know how to behave from posting a crappy question or answer on Stack Overflow, then you don't need paid moderators. In fact, Stack Overflow is one of the largest, richest communities with the highest quality content of its type in the world, and it has very, very few top-level moderators. All moderation tasks are actually done by any contributor who cares enough to generate enough content of enough quality on the site. If you ever want to see what a site looks like when it doesn't live with a banhammer as its first line of defense, Stack Overflow or any of the Stack Exchange sites are really interesting, showing how they incrementally give authority to you as you succeed in contributing to the community. Scott: At Schwab Learning, there was a point where I started to teach the other community members exactly what I was doing. I decided there was a point where I said, "I'm just going to be transparent about this, and start doing this in public." Especially when it was the nice stuff, and started showing people how I approached potential spammers by addressing their behavior and saying, "Hey, maybe this is a mistake. We'd like to hear more from you." The community picked up on it. I was no longer the first line of defense against spam.
The community became the first line of defense. What they would do is they would engage with anyone who looked like a spammer, and they would try to talk to them and draw them out, and if they weren't able to draw them out, then they would bump it up to me and say, "I think this person's actually trying to spam us." Randy: I do consulting on social media product design, and discussions about moderation are a critical part of what I consult on. So new clients often give me administrator access, or moderator access, to their communities so that I can see what's going on behind the scenes. At one of my clients, I was looking around for moderation information and I discovered a profile for a user, and there was a field on the profile that the administrators could see that said how often they'd been banned. This person had been banned six times. This is an example of the banhammer going insane. These are perma-bans. You prevent them from participating, and then apparently they could appeal, and then you could put them back on, and then they'd ban them again for similar behaviors, and so on. The banhammer is the wrong tool. In fact, every offense was the same, and it was a minor offense. Technology changes to the software to discourage that behavior would have been more effective in changing the behavior. Scott: So we're saying ban the banhammer, and we've been giving hints at things to do instead, ways to think about behavior online that won't cause you to think, "I've got to use that banhammer right away." And so, let's get real detailed and talk about exactly what you can do instead of using the banhammer. Randy: Number one, start by defining the behaviors you want to encourage and the ones you're trying to discourage from your contributors in your community. This should be the baseline for choosing the actions you take going forward. Scott: These behaviors are not people.
Yes, you have your community guidelines, but understand that if somebody violates a community guideline you don't punish the person. You give them an opportunity to correct the behavior. Randy: Amen. So based on your available resources, you can either develop tools to facilitate the community marking the content, to give feedback privately to the contributor, so that they know they should make some changes. Scott: Giving them a chance means that you're focusing on the content that they're producing. If a piece of content is clearly violating your terms of service, or a piece of content is clearly being generated by a bot, or it's clearly a spam going off to somewhere else, or it's illegal, then yes, you're going to want to remove the content. Randy: If you're at scale, you need the tools you need to find them. Sometimes your community is small, and a personal conversation is the right choice. Other times, your community is huge and you have to have the tools to scale or you will never solve the problem. People have tried to buy the solution with human moderation, and they've all given up. At scale, you need help. You need tools, and if you're lucky you can get tools to enable your community to do a lot of the basic work. Scott: Even if you're not at scale, don't overlook the ability to enlist your community in helping you identify and correct behaviors of people who are coming into your community. We're talking about avoiding the banhammer, which is a tool, and we're talking about all the other ways you can reduce damaging behavior in your community, and these are skills that anybody can employ including your community. So you can teach your community the same things. Teach them to engage and try to sus out the difference between is this person actually trying to harm you, or is this person just making a misstep about their behavior? If they can't handle it, then you become the escalation process. 
You then support your community as a community, and you can get a small amount of scale out of this even with a small community. Randy: Very true. And you might be able to get incremental tool development to support the community. So for example, if your community platform doesn't have a report-abuse button on content, that much incremental tool development may be significantly cheaper than you'd expect if all you want to do is count the number of people who mark a thing as violating the terms of service. Stack Overflow has a tool I'm not a big fan of, but it's functional. You can spend one of your points to actually give someone a negative score. I don't like the math of this, but if enough negatives go in fast enough, the item is immediately hidden. So they get community feedback immediately, and then there's an escalation process that can occur to appeal. They recently changed this to improve the initial negative-feedback pattern, by changing the name from "deleted" to "on hold", which invites a conversation, and there's a community practice that if you leave a negative one for anything other than the most obvious spamming behavior, you should leave a comment about how to improve the post. It's kind of a social system that they've evolved to go along with their mechanical systems. A mechanical system doesn't have to be complicated, but it provides a mechanism for social evolution. Scott: Reframing the idea of flagging away from "this is bad, it shouldn't be here" to "this is problematic, and we want to fix it." Randy: I consulted on discourse.org's moderation mechanism and it does just that.
When several people mark a thing as a problem, and the problem is not illegal content or spam but a content problem, the content still gets hidden, but the message that goes to the user invites them to edit it, to fix it based on the feedback from the community, and if they do edit it, it will be re-posted immediately with no flags on it. So we say, "You can fix it. You can go back to square zero with this post, immediately. Give it a try." So, we presume that for anything but the most egregious kinds of errors, the content hiding will be temporary until the problem is resolved. This is how people can learn the behavior that is expected of them in the community. Scott: I would like to see a lot more systems offering something like that. All too often, it is a post-and-punish model. You post it and either it goes away, and you're punished somehow, or you succeeded and it stays. What's missing from a lot of these systems is that we're just not giving people enough chances, and giving them the agency and the respect to actually change their behavior. Randy: This leads to the kind of thinking that was in the article when it said that trolls are irredeemable. What do you think it's going to take? If you never accepted their bad stuff from the beginning, and the community said, "If you want to post here, please don't be a dick," and there's a dick button, they will learn to conform, or they will leave. You don't have to kick them out, because their content never appears. And by the way, it turns out to be the same pattern. So the pattern is, "Do I post things that are only to my benefit and to the harm of others, or do I contribute to this community?" The definitions of those vary from place to place, and it is the community who can help you enforce them, as well as your moderators. So your moderators can focus on the real exceptions. Randy: Ban the banhammer. Scott: Ban the banhammer. Randy: Alright, we should say goodbye though. Scott: Oh yes. We should say goodbye.
Thank you very much for listening. We hope that this has been some help. So, don't reach for the banhammer. Randy: Yes, people are not nails. Catch you later. Randy: For links, transcripts, and more episodes, go to socialmediaclarity.net. Thanks for listening!

The Busy Creator Podcast with Prescott Perez-Fox
Online Resources for Creative Professionals to Learn Business Skills - The Busy Creator Podcast 72

The Busy Creator Podcast with Prescott Perez-Fox

Play Episode Listen Later Nov 30, 2015 30:40


This episode is a rundown of resources I've been collecting and analysing for some time. The Busy Creator Podcast is far from the only place online to learn business skills and discuss productivity, so strap yourself in and listen to this collection of 27 online resources where creative pros can learn business skills.   Tools & Websites Fizzle, Community The Fizzle Show Seanwes, Community The Podcast Dude Freelancers' Union FU Hives Stack Exchange GraphicDesign.StackExchange.com Lynda LinkedIn acquires Lynda for $1.5B InDesignSecrets.com Skillshare Coursera Udemy Udacity Chip Kidd on Skillshare Jessica Hische on Skillshare Erica Heinz on Skillshare Erica Heinz on The Busy Creator Podcast Courtney Eliseo on Skillshare Courtney Eliseo on The Busy Creator Podcast General Assembly The Flatiron School Digital Strategy School Marie Poulin & Ben Borowski Marie Poulin on The Busy Creator Podcast Ben Borowski on The Busy Creator Podcast Nathalie Lussier 30-Day Listbuilding Challenge Louder Than Ten Marketing Mentor Pitch Perfect Presentation The C Method Christina Canters on The Busy Creator Podcast Prescott Perez-Fox on The C Method Podcast Paper and Oats Guerrilla Freelancing Sidecar Made by SY/Partners 30-Foot Gorilla The Nu School Online Pricing Guide from The Nu School Learn The Secret Handshake HOW Magazine HOW Design Live Communication Arts TRY AUDIBLE.COM FREE FOR 30-DAYS Visit BusyCreatorBook.com for your free trial Get Make It Stick: The Science of Successful Learning by Peter C. Brown as a free audiobook GET THE EPISODE Download The Busy Creator Podcast, episode 72 (MP3, 30:48, 14.8 MB) Download The Busy Creator Podcast, episode 72 (OGG, 30:48, 15.7 MB) SUBSCRIBE TO GET NEW EPISODES   Subscribe to The Busy Creator Podcast on iTunes or on Android

The Hello World Podcast
Episode 42: Kate Gregory

The Hello World Podcast

Play Episode Listen Later Sep 29, 2014 46:23


Kate Gregory has been using C++ since before Microsoft had a C++ compiler, and has been paid to program since 1979. She loves C++ and believes that software should make our lives easier. That includes making the lives of developers easier! She'll stay up late arguing about deterministic destruction or how C++11 is not the C++ you remember. Kate runs a small consulting firm in rural Ontario and provides mentoring and management consulting services, as well as writing code every week. She has spoken all over the world, written over a dozen books, and helped thousands of developers to be better at what they do. Kate is a Microsoft Regional Director, a Visual C++ MVP, an Imagine Cup judge and mentor, and an active contributor to Stack Overflow and other Stack Exchange sites. She develops courses for Pluralsight, primarily on C++ and Visual Studio. In 2014 she was Open Content Chair for CppCon, the largest C++ conference ever held, where she also delivered sessions.

techzing tech podcast
113: TZ Interview - Jeff Atwood / Stack Exchange

techzing tech podcast

Play Episode Listen Later Mar 16, 2011 86:38


Justin and Jason interview Jeff Atwood, co-founder of Stack Overflow and the Stack Exchange network, about how he got started as a coder and his passion for programming and mentoring, how he and Joel Spolsky came up with the idea for Stack Overflow, his belief in free software and the OpenID initiative, the process of raising venture capital for Stack Exchange and his views on entrepreneurship, why he and Joel stopped doing the Stack Overflow podcast and whether they might start it up again, and the hardest step when scaling a web app.