Dr. Alessi welcomes Dr. David Grew, a radiation oncologist from Trinity Health of New England at St. Francis Hospital, to discuss prostate cancer and the use of CyberKnife treatments. He then speaks with Dr. David Kruger, an orthopedic spine surgeon from Advanced Orthopedics New England at St. Francis Hospital, about spine surgery, common conditions, the training required for his specialty, and the use of the Mako SmartRobotics system in spine surgery. Finally, Dr. Alessi reflects on President Biden's prostate cancer diagnosis and Billy Joel's diagnosis of normal pressure hydrocephalus.
Panel moderator Ben Stoller, CEO of Paxxal, is joined by industry experts Kevin Mazula, CEO of RM2; Shawn Stockman, VP of Sustainability for Onepak; and David Kruger, President of TriEnda, in a discussion about the investment climate for sustainable companies with reusable packaging systems.
Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing a Philosophy Fellowship for AI Safety, published by Anders Edson on September 7, 2022 on The Effective Altruism Forum.

Overview: The Center for AI Safety (CAIS) is announcing the CAIS Philosophy Fellowship, a program for philosophy PhD students and postdoctoral researchers to work on conceptual problems in AI safety.

Why Philosophers? Conceptual AI safety researchers aim to help orient the field and clarify its ideas, but in doing so, they must wrestle with imprecise, hard-to-define problems. Part of the difficulty of conducting conceptual AI safety research is that it involves abstract thinking about future systems that have yet to be built. Additionally, the concepts involved (e.g., "power", "intelligence", "optimization", "agency") can be particularly challenging to work with. Philosophers specialize in working through such nebulous problems; in fact, many active fields within philosophy present a similar type of conceptual challenge and have nothing empirical to lean on. As an example, a philosopher may take up the question of whether ethical claims can possess truth values. Questions like this cannot be approached by looking carefully at the world, making accurate measurements, or monitoring the ethical behavior of real people. Instead, philosophers must grapple with intuitions, introduce multiple perspectives, and provide arguments for selecting between those perspectives. We think this skill set makes philosophers especially well suited to conceptual research.

Philosophy has already proven useful for orienting the field of conceptual AI safety. Many of the foundational arguments for AI risk were philosophical in nature (consider, for example, Bostrom's Superintelligence). More recently, philosophy has had a direct influence on important research directions in AI safety. Joseph Carlsmith's work on power-seeking AI, for example, has directly influenced research currently being conducted by Beth Barnes and, separately, Dan Hendrycks. Peter Railton's lectures on AI have provided a compelling justification for further research on cooperative behavior in AI agents. Evans et al.'s exploration of truthful AI prompted more technical work on truthful and honest AI. Since philosophers have historically produced valuable conceptual AI safety work, we believe that introducing more philosophy talent into this research space has the potential to be highly impactful. By offering good incentives, we hope to attract strong philosophy talent with a high likelihood of producing quality conceptual research.

The Program: Our program will be a paid, in-person opportunity running from January to August 2023. Our ideal candidate is a philosophy PhD student or graduate with an interest in AI safety, exceptional research abilities, demonstrated philosophical rigor, self-motivation, and a willingness to spend time working with more technical subjects. No prior experience in AI or machine learning is necessary for this fellowship. There will be an in-depth onboarding program at the start of the fellowship to get researchers up to speed on the current state of AI and AI safety. Fellows will receive a $60,000 grant, covered student fees, and a housing stipend to relocate to San Francisco, CA.

The program will feature guest lectures from top philosophers and AI safety experts, including Nick Bostrom, Peter Railton, Hilary Greaves, Jacob Steinhardt, Rohin Shah, David Kruger, and Victoria Krakovna, among others. As an organization that places a high value on good conceptual researchers, we plan to extend full-time employment offers at our organization to top-performing fellows. Additionally, many institutions such as the Center for Human-Compatible AI (UC Berkeley), the Kavli Center for Ethics,...
Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Fund for Alignment Research (We're Hiring!), published by AdamGleave on July 6, 2022 on The AI Alignment Forum. Cross-posted to the EA Forum.

The Fund for Alignment Research (FAR) is hiring research engineers and communication specialists to work closely with AI safety researchers. We believe these roles are high-impact, contributing to some of the most interesting research agendas in safety. We also think they offer an excellent opportunity to build skills and connections via mentorship and close work with researchers at a variety of labs. We were inspired to start FAR after noticing that many academic and independent researchers wanted to work with more collaborators but often lacked the institutional framework and hiring pipeline to do so. On the other hand, there are a large number of talented people who would like to contribute to AI safety but lack a stepping stone into the field. Our hope is that FAR can help fill this gap, both directly accelerating valuable research and helping to address the talent pipeline. In the remainder of this post we'll give a bit more information about FAR and our current collaborators, and then summarize our current openings. Please consider applying or forwarding them to a friend who might be interested! We are also actively brainstorming other ways that FAR could be useful to the community. If you have any ideas, we'd love to hear from you!

About Us: FAR is a non-profit led by Ethan Perez, Adam Gleave, Scott Emmons, and Claudia Shi: a group of AI safety researchers looking to reduce existential risk from artificial intelligence. Ethan recently graduated from the PhD program at New York University, Adam and Scott are PhD candidates at UC Berkeley, and Claudia is a PhD candidate at Columbia University. FAR provides services to AI safety researchers to accelerate their research agendas. We are currently focused on supporting the agendas of Ethan, Adam, Scott, and Claudia. We are also trialing a collaboration with the labs of David Kruger and Jacob Steinhardt, professors at the University of Cambridge and UC Berkeley. Our services are currently provided free of charge to recipients out of FAR's general support funding. In the future we plan to charge partners who use large quantities of our services on an at-cost basis. This could be paid for from a partner's existing grant, or we can assist the partner with fundraising for this purpose.

We anticipate supporting many of the same people that BERI currently works with. However, our organisations have differing emphases. First, our core services are different: to a first approximation, BERI provides "operations as a service" whereas FAR provides "a technical team as a service". That is, FAR recruits, manages, and trains its own team, whereas BERI hires people primarily at the request of its partners. Second, FAR works primarily with individual researchers whereas BERI works primarily with entire labs, although this distinction may blur in the future. Finally, FAR is more opinionated than BERI: if we have more demand for our services than our team can support, we will prioritize based on our internal view of which agendas are most promising.

Although FAR is a new organization, our research has already led to a method for learning from language feedback as a data-efficient alternative to RL from human feedback. We have analyzed challenges associated with treating a language model as an RL policy and launched a competition on inverse scaling for language models. We are currently pursuing several other early-stage AI safety projects. Once we have beta-tested our model, we plan to expand the number of partners we work with. Feel free to get in touch at hello@alignmentfund.org if you think you might be a good fit!

Operations Manager: We are seeking an Operation...
David Kruger is the Co-founder and VP of Strategy at Absio, a military-grade data security corporation. He is also the Co-inventor of Absio's patented Software-Defined Distributed Key Cryptography (SDKC). David has over 40 years of experience in technology consulting, business development, and sales strategy. He is a certified General Data Protection Regulation (GDPR) practitioner with knowledge in all aspects of data privacy. In this episode… Data security is becoming more complex, and many companies don't realize that data is a physical substance that can cause damage if it's not appropriately controlled. So, how can you secure your data to ensure widespread protection? For starters, companies should take a proactive rather than reactive approach to controlling data usage. David Kruger's patented technology, Software-Defined Distributed Key Cryptography (SDKC), enables software applications to create the keys needed to encrypt data safely and efficiently. With SDKC, you can store your data and keys in one secure location to seamlessly maintain control over your data. In today's episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with David Kruger, Co-founder and VP of Strategy of Absio, about actively securing your data. David shares his approach to data security, tips for efficient data encryption and decryption, and how his patented Software-Defined Distributed Key Cryptography (SDKC) technology can help companies effectively secure their data.
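The approach described above, in which the application itself creates and manages encryption keys rather than outsourcing them to a central keystore, can be made concrete with a short sketch. The following Python example is a hypothetical illustration of per-object key generation and key wrapping using the open-source cryptography library; it is not Absio's proprietary SDKC, and the function names are invented for illustration.

```python
# Generic sketch of application-level, per-object encryption.
# NOT Absio's SDKC -- only an illustration of the general pattern the
# episode describes: the application creates and controls its own keys.
from cryptography.fernet import Fernet

def encrypt_object(plaintext: bytes, master_key: bytes) -> dict:
    """Encrypt one object under its own fresh key, then wrap that key."""
    object_key = Fernet.generate_key()                    # per-object key made by the app
    ciphertext = Fernet(object_key).encrypt(plaintext)    # data encrypted with object key
    wrapped_key = Fernet(master_key).encrypt(object_key)  # key stored alongside the data
    return {"ciphertext": ciphertext, "wrapped_key": wrapped_key}

def decrypt_object(record: dict, master_key: bytes) -> bytes:
    """Unwrap the object key, then decrypt the object."""
    object_key = Fernet(master_key).decrypt(record["wrapped_key"])
    return Fernet(object_key).decrypt(record["ciphertext"])

if __name__ == "__main__":
    master = Fernet.generate_key()
    record = encrypt_object(b"sensitive payload", master)
    assert decrypt_object(record, master) == b"sensitive payload"
```

Because the wrapped key travels with the ciphertext, the application keeps control of decryption wherever the data is stored, which is the general property the episode attributes to SDKC.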
Is your cyber information secure? Are you aware of what's happening in your cyberspace? Have your passwords been compromised? We had David Kruger, Vice President of Absio, talk about the top 3 things you should know about cybersecurity and digital privacy. Find him at:
LinkedIn: https://www.linkedin.com/in/davidakruger/
Website: https://www.absio.com/
Forbes Council: https://www.forbes.com/sites/forbestechcouncil/people/davidkruger/?sh=2a42b56c131e
---
While you're at it, follow us on all social media platforms:
Facebook: https://www.facebook.com/AiNerdOfficial
Instagram: https://www.instagram.com/ainerdofficial/
Twitter: https://twitter.com/AINerdOfficial
LinkedIn: https://www.linkedin.com/company/ainerd
Be my next guest! Let's find time at https://calendly.com/instarel/learn-more
Don't forget to subscribe so you never miss an episode.
Today, our guest David Kruger has a self-proclaimed contrarian view of cybersecurity and how to accomplish cybersecurity overall. He says he likes to look at the underbelly of the problem and the mathematics behind cybersecurity issues in ways many tend not to. David's passion lies in designing cybersecurity, and his perspective tends toward how the cyber world could be different from what it is today. David pulls back the mystique and perceived 'magic' of technology to discuss the reality of what our devices are: carefully crafted machines that do what they are told, good or bad. Come theorize with us about what the future of cybersecurity could look like. Today's show will interest you if you are analytically minded. Visit our sponsors: BlockFrame Inc. SecureSet Academy Murray Security Services
David Kruger is the VP of Strategy and co-founder of Absio and uses his knowledge to weigh in on data hardening and its impact on identity and access management. In this episode of Cybersecurity Unplugged, Kruger discusses his background in military intelligence and Absio's creation of a new tactical battlefield communication system, the concept of data ownership versus the information within data, and the future of the software engineering community as it moves toward a manufacturing-style process.
Debbie Reynolds, "The Data Diva," talks to David Kruger, VP of Strategy and Co-Founder of Absio Corporation and co-creator of Software-Defined Distributed Key Cryptography. We discuss David's technology background, the problem of data control and data leakage, the history of metadata and true data insights, changing the ways we think about data protection, managing third-party data risk using software, protecting data using location restrictions, the impact of the invalidation of the EU-US and Swiss Privacy Shield on data movements and security, the ability to store cryptography keys anywhere in the world, what is not being discussed about data privacy and data protection but should be, data control and data as property rights, why redress for breaches is difficult, creeping data seepage, the need for technology that engenders trust, and his wish for data privacy in the future.
David Kruger shares his experiences as a Gideon and helps us understand the Gideon mission. March 21st, 2021
This episode is a special live recording from TF4, our most recent SOLD OUT TF Blockchain Conference, held on November 14, 2019 at the Triple Door in Seattle. In this episode, our Seattle Chapter Director Paul Rapino leads a panel discussing "Record Management" with Kapi Attawar of Samsung SDS, Robert Mao of ArcBlock, David Kruger of Absio, and Dr. Setrag Khoshafian of Pegasystems. Record management involves how data is recorded and accessed on a blockchain. This was a very engaging panel at TF4. Hope you enjoy it.
On this episode of Special Criminals, we discuss the case of Peter Woodcock, a convicted murderer who was officially diagnosed as a psychopath. This episode contains violence against children, so please be warned. Find us on social media! Search for Special Criminals. Also, please rate and review! It helps us out a ton! We appreciate you!
Pleasant Valley Rotarian, and 2015 winner of the Jeffrey Keahon Foundation Award, David Kruger describes the work of The Rotary Foundation, the part of Rotary that not only does good in the world away from our communities but also helps fund many local projects. While its most notable success has been the worldwide reduction of polio to a handful of cases in two countries, with complete elimination on the horizon, The Rotary Foundation may have saved even more lives with its projects for clean water, maternal and child health, and peace. It is also among the most efficient charities: nearly all of every dollar given to The Rotary Foundation is used for its work rather than spent on administration or fundraising. --- Support this podcast: https://anchor.fm/radiorotary/support
Michael Christophides, Chief Inspector and Laboratory Director of Granit Inspection Group, joins co-host Sarah O’Connell and guest co-host and physicist David Kruger to describe the hazards of radon, a gas that can accumulate in buildings or in well water. Radon is an odorless radioactive gas released from rocks of all kinds, but especially from granite or the dark shales found in the Hudson Valley. Radon in the air is the second leading cause of lung cancer (and adds to the risk of cancer in smokers), while radon in water can increase the risk of stomach cancer and may also be released into the air. Testing is the only way to tell whether radon is present at dangerous levels. Mr. Christophides recommends testing every two years, since levels can change based on seismic activity, changes in structures, or other factors. If a test reveals excess radon, remediation usually consists of suctioning air away before it can enter the house. --- Support this podcast: https://anchor.fm/radiorotary/support
In this episode: David Kruger, co-founder of Absio, is our feature guest this week. News from: Hyperloop, Amazon, SendGrid, CU, CTA, Alchemy Security, Red Canary, InteliSecure, Optiv, root9B, CableLabs, and a lot more! Full details here: https://www.colorado-security.com/news/2017/11/13/42-1120-david-kruger-co-founder-of-absio

Come Slack with us: Please come join us on the new Colorado = Security Slack channel to meet old and new friends.

Lots of news this week. Hyperloop is coming to town, Amazon HQ2 proposal details are out, SendGrid goes public, Colorado security workers are in high demand, CU students are learning security, the CTA offers scholarships to Colorado high school students, Red Canary's boss gets profiled, InteliSecure is still growing fast, Optiv makes a pair of acquisitions, Brian Krebs tears into root9B, and how to make your own LTE network.

Did you catch our trivia question? Be the first to reply to info@colorado-security.com with the right answer and get any $25 item from the Colorado = Security store.

Feature interview: David Kruger is co-founder of Absio and came to information and data security differently than many of us. David is an entrepreneur and co-founded Absio with his twin brother. They looked at the root cause of data loss and then used that root-cause analysis to find potential solutions to the problem. Enjoy this great discussion about data security and getting to the root of the problem.

Sign up for our mailing list on the main site to receive weekly updates: https://www.colorado-security.com/. If you have any questions or comments, or any organizations or events we should highlight, contact Alex and Robb at info@colorado-security.com

Local security news:
Colorado = Security store! Buy things now
Join the Colorado = Security Slack channel
Hyperloop coming to Denver
Colorado releases its proposal pitching the Denver area as Amazon's second headquarters
Denver not a top-five choice for Amazon HQ2, Wall Street Journal says, despite breweries
SendGrid's public debut sees stock price jump 16 percent at opening
Colorado cybersecurity workers in high demand
CU Boulder students flex their cybersecurity muscles
Colorado Technology Association, Colorado Succeeds and Silicon STEM Academy Offer Computer Science Scholarships for Colorado High School Students
Alchemy Security leader advises Congress
Thought Leader: Brian Beyer's Red Canary blends tech with family-business service
InteliSecure ranked #6 on Denver Business Journal 2017 Fast 50
Optiv Security Acquires Decision Lab
Optiv Security Acquires Conexsys
Krebs on Security: root9B we hardly knew ya
root9B Holdings Announces Discontinuation of Operations
How to build your own LTE network

Job openings:
LenderLive - Chief Information Security Officer (CISO)
DISH - IT Security Manager
Comcast - Manager 1, Security Incident Response
CA Technologies - Senior Cybersecurity Engineer
Wells Fargo - Systems Architect 5 - Payment System Security
GuidePoint Security - vSOC Cyber Threat Hunter
PwC - Cyber Risk Experienced Associate
Pearson - Information Security Internship (CISO Business Operations)
Level 3 Communications - INTERN - INROADS Cyber Security
Optiv - Executive Vice President, Security Services and Solutions

Upcoming events, this week and next:
Optiv - 2017 Solution and Program Insight Focus Group: Application Security (AppSec) - 11/26

Other notable upcoming events:
SnowFROC - 3/8
Rocky Mountain Information Security Conference - 5/8-10
View our events page for a full list of upcoming events

* Thanks to CJ Adams for our intro and exit! If you need any voiceover work, you can contact him at carrrladams@gmail.com. Check out his other voice work here. * Intro and exit song: "The Language of Blame" by The Agrarians is licensed under CC BY 2.0
Interview with David Kruger, the French voice of Master Chief in Halo
In this podcast, Carol and Katie talk about women's relationship to money in order to prep you for the upcoming webinar A Woman's Money Story with special guest Dr. David Kruger, MD. Register for the webinar here. They also talk about the book Happy Money: The Science of Happier Spending by authors Elizabeth Dunn and […]
In this episode we talk to David Kruger, president of The Aircraft Partnership Association. He describes how pilots can find other pilots in their area who might be a good fit for a joint aircraft ownership group.