Podcasts about narrow AI

  • 63 PODCASTS
  • 71 EPISODES
  • 34m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • May 21, 2025 LATEST

POPULARITY (chart: 2017-2024)


Best podcasts about narrow AI

Latest podcast episodes about narrow AI

52 Weeks of Cloud
The Toyota Way: Engineering Discipline in the Era of Dangerous Dilettantes

52 Weeks of Cloud

Play Episode Listen Later May 21, 2025 14:38


Dangerous Dilettantes vs. Toyota Way Engineering

Core Thesis
The influx of AI-powered automation tools creates dangerous dilettantes - practitioners who know just enough to be harmful. The Toyota Production System (TPS) principles provide a battle-tested framework for integrating automation while maintaining engineering discipline.

Historical Context
• Toyota Way formalized ~2001
• DevOps principles derive from TPS
• Coincided with post-dotcom crash startups
• Decades of manufacturing automation parallel modern AI-based automation

Dangerous Dilettante Indicators
• Promises magical automation without understanding systems
• Focuses on short-term productivity gains over long-term stability
• Creates interfaces that hide defects rather than surfacing them
• Lacks understanding of production engineering fundamentals
• Prioritizes feature velocity over deterministic behavior

Toyota Way Implementation for AI-Enhanced Development

1. Long-Term Philosophy Over Short-Term Gains

```typescript
// Anti-pattern: Brittle automation script
let quick_fix = agent.generate_solution(problem, {
  optimize_for: "immediate_completion",
  validation: false
});

// TPS approach: Sustainable system design
let sustainable_solution = engineering_system
  .with_agent_augmentation(agent)
  .design_solution(problem, {
    time_horizon_years: 2,
    observability: true,
    test_coverage_threshold: 0.85,
    validate_against_principles: true
  });
```

• Build systems that remain maintainable across years
• Establish deterministic validation criteria before implementation
• Optimize for total cost of ownership, not just initial development

2. Create Continuous Process Flow to Surface Problems
Implement CI pipelines that surface defects immediately:
• Static analysis validation
• Type checking (prefer strong type systems)
• Property-based testing
• Integration tests
• Performance regression detection
Build flow: make lint → make typecheck → make test → make integration → make benchmark
• Fail fast at each stage
• Force errors to surface early rather than be hidden by automation
• Agent-assisted development must enhance visibility, not obscure it

3. Pull Systems to Prevent Overproduction
• Minimize code surface area - only implement what's needed
• Prefer refactoring to adding new abstractions
• Use agents to eliminate boilerplate, not to generate speculative features

```typescript
// Prefer minimal implementations
function processData<T>(data: T[]): Result<T> {
  // Use an agent to generate only the exact transformation needed
  // Not to create a general-purpose framework
}
```

4. Level Workload (Heijunka)
• Establish consistent development velocity
• Avoid burst patterns that hide technical debt
• Use agents consistently for small tasks rather than large sporadic generations

5. Build Quality In (Jidoka)
• Automate failure detection, not just production
• Any failed test/lint/check = full system halt
• Every team member empowered to "pull the andon cord" (stop integration)
• AI-assisted code must pass the same quality gates as human code
• Quality gates should be more rigorous with automation, not less

6. Standardized Tasks and Processes
• Uniform build system interfaces across projects
• Consistent command patterns: make format, make lint, make test, make deploy
• Standardized ways to integrate AI assistance
• Documented patterns for human verification of generated code

7. Visual Controls to Expose Problems
• Dashboards for code coverage
• Complexity metrics
• Dependency tracking
• Performance telemetry
• Use agents to improve these visualizations, not bypass them

8. Reliable, Thoroughly-Tested Technology
• Prefer languages with strong safety guarantees (Rust, OCaml, TypeScript over JS)
• Use static analysis tools (clippy, eslint)
• Property-based testing over example-based

```rust
#[test]
fn property_based_validation() {
    proptest!(|(input: Vec<u8>)| {
        let result = process(&input);
        // Must hold for all inputs
        assert!(result.is_valid_state());
    });
}
```

9. Grow Leaders Who Understand the Work
• Engineers must understand what agents produce
• No black-box implementations
• Leaders establish a culture of comprehension, not just completion

10. Develop Exceptional Teams
• Use AI to amplify team capabilities, not replace expertise
• Agents as team members with defined responsibilities
• Cross-training to understand all parts of the system

11. Respect Extended Network (Suppliers)
• Consistent interfaces between systems
• Well-documented APIs
• Version guarantees
• Explicit dependencies

12. Go and See (Genchi Genbutsu)
• Debug the actual system, not the abstraction
• Trace problematic code paths
• Verify agent-generated code in context
• Set up comprehensive observability

```go
// Instrument code to make the invisible visible
func ProcessRequest(ctx context.Context, req *Request) (*Response, error) {
	start := time.Now()
	// Wrap in a closure so latency is measured at return time, not when defer is declared
	defer func() { metrics.RecordLatency("request_processing", time.Since(start)) }()

	// Log entry point
	logger.WithField("request_id", req.ID).Info("Starting request processing")

	// Processing with tracing points
	// ...

	// Verify exit conditions
	if err != nil {
		metrics.IncrementCounter("processing_errors", 1)
		logger.WithError(err).Error("Request processing failed")
	}
	return resp, err
}
```

13. Make Decisions Slowly by Consensus
• Multi-stage validation for significant architectural changes
• Automated analysis paired with human review
• Design documents that trace requirements to implementation

14. Kaizen (Continuous Improvement)
• Automate common patterns that emerge
• Regular retrospectives on agent usage
• Continuous refinement of prompts and integration patterns

Technical Implementation Patterns

AI Agent Integration

```typescript
interface AgentIntegration {
  // Bounded scope
  generateComponent(spec: ComponentSpec): Promise<Component>;

  // Surface problems
  validateGeneration(code: string): Promise<ValidationResult>;

  // Continuous improvement
  registerFeedback(generation: string, feedback: Feedback): void;
}
```

Safety Control Systems
• Rate limiting
• Progressive exposure
• Safety boundaries
• Fallback mechanisms
• Manual oversight thresholds

Example: CI Pipeline with Agent Integration

```yaml
# ci-pipeline.yml
stages:
  - lint
  - test
  - integrate
  - deploy

lint:
  script:
    - make format-check
    - make lint
    # Agent-assisted code must pass same checks
    - make ai-validation

test:
  script:
    - make unit-test
    - make property-test
    - make coverage-report
    # Coverage thresholds enforced
    - make coverage-validation

# ...
```

Conclusion
Agents provide useful automation when bounded by rigorous engineering practices. The Toyota Way principles offer a proven methodology for integrating automation without sacrificing quality. The difference between a dangerous dilettante and an engineer isn't knowledge of the latest tools, but understanding of the fundamental principles that ensure reliable, maintainable systems.
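The jidoka idea described in the notes above - any failed check halts the whole pipeline, the software equivalent of pulling the andon cord - can be sketched in a few lines. This is a minimal illustration, not the episode's own tooling; the check names are illustrative:

```python
# Jidoka-style quality gate: run each check in order and halt the whole
# pipeline at the first failure, so defects surface immediately instead
# of being hidden downstream.
def run_pipeline(checks):
    """checks: list of (name, callable) pairs; each callable returns bool."""
    for name, check in checks:
        if not check():
            return f"HALT at {name}"  # "pull the andon cord"
    return "PASS"

# Example: a typecheck failure stops the run before tests ever execute.
result = run_pipeline([
    ("lint", lambda: True),
    ("typecheck", lambda: False),  # simulated failure
    ("test", lambda: True),
])
print(result)
```

The key design choice is that the gate returns at the first failure rather than accumulating results, which is exactly the fail-fast behavior the build-flow bullets call for.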

52 Weeks of Cloud
DevOps Narrow AI Debunking Flowchart

52 Weeks of Cloud

Play Episode Listen Later May 16, 2025 11:19


Extensive Notes: The Truth About AI and Your Coding Job

Types of AI

Narrow AI
• Not truly intelligent: pattern matching and full-text search
• Examples: voice assistants, coding autocomplete
• Useful but contains bugs; multiple narrow AI solutions compound bugs
• Get in, use it, get out quickly

AGI (Artificial General Intelligence)
• No evidence we're close to achieving this; may not even be possible
• Would require human-level intelligence, which needs consciousness to exist
• Consciousness: the ability to recognize what's happening in the environment; no concept of this exists in narrow AI approaches
• Pure fantasy and magical thinking

ASI (Artificial Super Intelligence)
• Even more fantasy than AGI
• No evidence at all that it's possible; more science fiction than reality

The DevOps Flowchart Test
1. Can you explain what DevOps is? If no → you're incompetent on this topic. If yes → continue to the next question.
2. Does your company use DevOps? If no → you're inexperienced and a magical thinker. If yes → continue to the next question.
3. Why would you think narrow AI has any form of intelligence?

Anyone claiming AI will automate coding jobs while understanding DevOps is likely a magical thinker, unaware of the scientific process, or a grifter.

Why DevOps Matters
• Proven methodology similar to the Toyota Way
• Based on continuous improvement (kaizen) and a look-and-see approach to reducing defects
• Constantly improving build systems, testing, linting
• No AI component other than basic statistical analysis
• A feedback loop that makes systems better

The Reality of Job Automation
• People who do nothing might be eliminated; if they did nothing, that's not AI automating a job
• Workers who create negative value (people who create bugs at 2 AM): their elimination isn't AI automation either

Measuring Software Quality
• High-churn files correlate with defects; constant changes to the same file indicate not knowing what you're doing
• DevOps patterns help identify issues by tracking file changes, measuring complexity, code coverage metrics, and deployment frequency

Conclusion
• We're at the very early stages of combining narrow AI with DevOps
• Narrow AI tools are useful but limited; look beyond magical thinking
• Opinions don't matter if you don't understand DevOps, don't use DevOps, or claim to understand DevOps but believe narrow AI will replace developers

Raw Assessment
• If you don't understand DevOps → your opinion doesn't matter
• If you understand DevOps but don't use it → your opinion doesn't matter
• If you understand and use DevOps but think AI will automate coding jobs → you're likely a magical thinker or a grifter
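The churn metric mentioned in these notes - high-churn files correlate with defects - can be sketched as a small script. This is a hedged illustration of the idea, not the episode's tooling; the file names and threshold are made up for the example:

```python
from collections import Counter

# Flag "hotspot" files: files that appear in many commits are the ones
# most likely to harbor defects, per the churn heuristic above.
def churn_hotspots(commits, threshold=3):
    """commits: iterable of lists of file paths touched per commit."""
    counts = Counter(path for files in commits for path in files)
    return {path: n for path, n in counts.items() if n >= threshold}

# Illustrative commit history
history = [
    ["src/auth.py", "src/db.py"],
    ["src/auth.py"],
    ["src/auth.py", "README.md"],
    ["src/db.py"],
]
print(churn_hotspots(history))  # src/auth.py changed often enough to be flagged
```

In practice the commit lists would come from something like `git log --name-only`, and the threshold would be tuned per repository.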

Building Better Managers
AI: A Game Changer for Coaches with David Evans (Encore) | Ep #118

Building Better Managers

Play Episode Listen Later Apr 22, 2025 33:46


In this encore episode, host Wendy Hanson engages with David Evans, VP of Product at New Level Work, to explore the transformative role of artificial intelligence (AI) in leadership development. They discuss the definitions of AI, its applications in product development, and successful integrations in the industry. David shares insights on how AI can enhance coaching, streamline administrative tasks, and foster stronger human relationships.

Key takeaways:
• AI is not a replacement but a tool for enhancement.
• Understanding AI is crucial for future job security.
• AI can create value by helping leaders thrive.
• Narrow AI focuses on specific tasks, while AGI aims for human-like intelligence.
• AI can recommend personalized learning and coaching options.
• Generative AI can assist in content creation and marketing.
• AI can help streamline administrative tasks for coaches.
• Leadership Oracle provides tailored answers based on best practices.
• AI can analyze 360 data for personalized development plans.
• Ethics and privacy are paramount in AI development.

Meet David: David Evans is the Vice President of Product at New Level Work (formerly BetterManager). In this role, David is responsible for overseeing the product vision and strategy for the company, helping it to achieve its mission of making thriving at work the norm by developing better leaders. Prior to joining New Level Work as VP of Product, David led Talent Success at Amplitude, where he supported the growth of managers and their teams, scaling company culture as the business scaled. David has coached tech leaders and their teams for over 15 years, from baby startups to Fortune 100s, through capital raisings, M&A transactions, and public listings. David was previously Technical Director at Adopt-a-Pet.com and Founder & CEO of two tech companies, which resulted in one exit and a 9-figure acquisition deal that went south; life's an adventure! David's biggest and most important growth challenge so far: co-parenting 2 young kids.
LinkedIn: https://www.linkedin.com/in/jeditrainer/ Email: david.evans@newlevelwork.com Do you enjoy our show? One of the best ways to help us out is leave a 5-star review on your platform of choice! It's easy - just go here: https://www.newlevelwork.com/review For more information, please visit the New Level Work website. https://www.newlevelwork.com/category/podcast © 2019 - 2025 New Level Work

Chrisman Commentary - Daily Mortgage News
2.28.25 Prices on the Up; PMSI's Dan Thompson and Rhonda Gallion on Narrow AI; PCE as Expected

Chrisman Commentary - Daily Mortgage News

Play Episode Listen Later Feb 28, 2025 27:02 Transcription Available


Welcome to The Chrisman Commentary, your go-to daily mortgage news podcast, where industry insights meet expert analysis. Hosted by Robbie Chrisman, this podcast delivers the latest updates on mortgage rates, capital markets, and the forces shaping the housing finance landscape. Whether you're a seasoned professional or just looking to stay informed, you'll get clear, concise breakdowns of market trends and economic shifts that impact the mortgage world.

In today's episode, we look at why prices seem to keep rising across the board. Plus, Robbie sits down with Preferred Mortgage Services' Dan Thompson and Rhonda Gallion to talk about how Narrow AI is solving the billion-dollar cash flow shortfall for the mortgage ecosystem. And he closes by discussing the latest PCE inflation and core PCE figures.

Sagent powers banks and lenders to make loans and homeownership simpler and safer for millions of consumers. We bring the modern experience customers now expect from loan originations to loan servicing, where lifetime customer relationships are managed and grown. Sagent platforms let consumers manage their home-owning lives from anywhere while giving servicers lower costs, scale compliance, and higher servicing values through full market cycles.

Bitcoin for Millennials
Bitcoin is the Greatest Wealth Opportunity for Millennials | AmericanHODL | BFM121

Bitcoin for Millennials

Play Episode Listen Later Jan 27, 2025 97:35


AmericanHODL is a Bitcoin OG and prominent figure in the Bitcoin community, known for his strong advocacy, philosophical takes, and memes. Follow HODL: https://x.com/americanhodl8

We Are, Marketing Happy - A Healthcare Marketing Podcast
AI For Healthcare Marketers: Part 1 of 3

We Are, Marketing Happy - A Healthcare Marketing Podcast

Play Episode Listen Later Jan 3, 2025 25:51


In the first episode of our three-part AI for health marketers series, we break down the basics of AI and how healthcare marketers can use it effectively. Jenny discusses how AI has rapidly evolved - ChatGPT is just two years old, yet 56% of adults have already used it - and explains the types of AI, including Narrow AI (used today), General AI, and Superintelligent AI. Jenny also covers how AI learns through methods like supervised, unsupervised, and reinforcement learning, and highlights its key capabilities, such as natural language processing and computer vision. The episode introduces top platforms like ChatGPT, Gemini, Copilot, Perplexity, and Claude, breaking down what makes each unique. By the end, you'll have a clear understanding of the leading AI tools and how they can be utilized.

Connect with Jenny:
• Email: jenny@hedyandhopp.com
• LinkedIn: https://www.linkedin.com/in/jennybristow/

If you enjoyed this episode, we'd love to hear your feedback! Please consider leaving us a review on your preferred listening platform and sharing it with others.

Health Hats, the Podcast
From Dick Tracy to AI: Out of Mind to Beyond Mind

Health Hats, the Podcast

Play Episode Listen Later Dec 19, 2024


Demystify AI's evolution, from Netflix recommendations to ChatGPT, exploring how neural networks learn and why even AI creators can't fully explain how it works. (Claude AI was used in this summary.)

Health Hats, the Podcast
AI: Neither Artificial nor Intelligent. Useful and Sobering

Health Hats, the Podcast

Play Episode Listen Later Nov 3, 2024 16:15


What kind of Artificial Intelligence does Health Hats, the Podcast, use in production? Understanding types of AI, transparency, and ethical considerations. (Perplexity was used in this summary.)

AI Tools in Use
Various AI-powered software and apps are utilized in production, including Zoom, Descript, Grammarly, DaVinci Resolve, Canva, Perplexity, and OpenArt AI.

Types of AI
The episode breaks down different categories of AI, including Narrow AI, Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI). AI functionalities are explained, from Reactive Machine AI to the theoretical Self-Aware AI.

Ethical Considerations
• Transparency and disclosure of AI usage in content creation
• Maintaining authenticity and human creativity
• Ensuring content accuracy and preventing misinformation
• Addressing bias and fairness in AI algorithms
• Protecting user privacy and data

Ensuring Transparency
• Disclosing AI usage in audio content and metadata
• Clear communication with the audience about AI utilization
• Appropriate use of AI as a tool to enhance, not replace, human creativity
• Verifying and fact-checking AI-generated content

The episode emphasizes the importance of using AI responsibly to enhance the podcasting experience while maintaining integrity, authenticity, and trust with the audience. Click here to view the printable newsletter with images. More readable than a transcript, which can also be found below.
Please comment and ask questions: in the comment section at the bottom of the show notes, on LinkedIn, via email, on the YouTube channel, or by DM on Instagram, Twitter, or TikTok to @healthhats.

Production Team
• Kayla Nelson: Web and Social Media Coach, Dissemination, Help Desk
• Leon van Leeuwen: article-grade transcript editing
• Oscar van Leeuwen: video editing
• Julia Higgins: Digital marketing therapy
• Steve Heatherington: Help Desk and podcast production counseling
• Joey van Leeuwen, Drummer, Composer, and Arranger, provided the music for the intro, outro, proem, and reflection, including Moe's Blues for Proem and Reflection and Bill Evans' Time Remembered for on-mic clips.

Podcast episodes on YouTube from Podcast.
Inspired by and Grateful to: Amy Price, Fred Trotter, Dave deBronkart, Eric Pinaud, Emily Hadley, Laura Marcial, James Cummings, Ken Goodman

Links and references
• https://conversational-leadership.net/quotation/ai-is-neither-artificial-not-intelligent/
• AI in Podcasting: Transforming Podcast with AI Technology
• Best AI tools for podcasts
• AI in Podcasting: A Guide for Brand Marketers
• https://www.carmatec.com/blog/ai-in-media-and-entertainment-complete-guide/ (AI in Media and Entertainment Complete Guide)

Episode Proem
No surprise, I use Artificial Intelligence in my podcast production. As an early adopter of technology,

YAP - Young and Profiting
Mustafa Suleyman: Harnessing AI to Transform Work, Business, and Innovation | E314

YAP - Young and Profiting

Play Episode Listen Later Oct 28, 2024 74:30


At just 11 years old, Mustafa Suleyman started buying and reselling candy for profit in his modest London neighborhood. Many years later, he co-founded one of the most groundbreaking AI companies, DeepMind, which Google later acquired for £400 million. But while at Google, Mustafa felt things were moving too slowly with LaMDA, an AI project that eventually became Gemini. Convinced that the technology was ready for real-world impact, he left to co-found Inflection AI, aiming to build technology that feels natural and human. In this episode, Mustafa shares insights on how AI is quickly changing how we work and live, the challenges of using it responsibly, and what the future might hold.

In this episode, Hala and Mustafa will discuss:
• The ethical challenges of AI development
• How AI can be misused when in the wrong hands
• AI: a super-intelligent aid at your fingertips
• Why personalized AI companions are the future
• Could AI surpass human intelligence?
• Narrow AI vs. Artificial General Intelligence (AGI)
• How Microsoft Copilot is transforming the future of work
• A level playing field for everyone
• How AI can transform entrepreneurship
• How AI will replace routine jobs and enable creativity

Mustafa Suleyman is the CEO of Microsoft AI and co-founder of DeepMind, one of the world's leading artificial intelligence companies, now owned by Google. In 2022, he co-founded Inflection AI, which aims to create AI tools that help people interact more naturally with technology. An outspoken advocate for AI ethics, he founded the DeepMind Ethics & Society team to study the impact of AI on society. Mustafa is also the author of The Coming Wave, which explores how AI will shape the future of society and global systems. His work has earned him recognition as one of Time magazine's 100 most influential people in AI in both 2023 and 2024.
Connect with Mustafa:
• LinkedIn: https://www.linkedin.com/in/mustafa-suleyman/
• Twitter: https://x.com/mustafasuleyman

Resources Mentioned:
• Mustafa's book, The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma: https://www.amazon.com/Coming-Wave-Technology-Twenty-first-Centurys/dp/0593593952
• Inflection AI: https://inflection.ai/
• LinkedIn Secrets Masterclass, Have Job Security For Life: Use code 'podcast' for 30% off at yapmedia.io/course.
• Top Tools and Products of the Month: https://youngandprofiting.com/deals/

More About Young and Profiting
• Download Transcripts - youngandprofiting.com
• Get Sponsorship Deals - youngandprofiting.com/sponsorships
• Leave a Review - ratethispodcast.com/yap
• Watch Videos - youtube.com/c/YoungandProfiting

Follow Hala Taha
• LinkedIn - linkedin.com/in/htaha/
• Instagram - instagram.com/yapwithhala/
• TikTok - tiktok.com/@yapwithhala
• Twitter - twitter.com/yapwithhala

Learn more about YAP Media's Services - yapmedia.io/

AI, Government, and the Future by Alan Pentz
The Path to Human-Level AI: Insights from AGI Pioneer Peter Voss of Aigo.ai

AI, Government, and the Future by Alan Pentz

Play Episode Listen Later Jul 10, 2024 37:14


In this episode of AI, Government, and the Future, host Max Romanik is joined by Peter Voss, CEO of Aigo.ai and a pioneer in artificial general intelligence (AGI). Peter discusses the critical differences between narrow AI and AGI, exploring how AGI aims to overcome the limitations of current AI technologies. He shares his vision for cognitive AI architectures, the potential impact of AGI on various industries, and his perspectives on AI regulation and development. Peter also addresses concerns about AI safety and emphasizes the need for a balanced view in discussions about AI's future.

The Foresight Institute Podcast
Existential Hope Podcast: Roman Yampolskiy | The Case for Narrow AI

The Foresight Institute Podcast

Play Episode Listen Later Jun 26, 2024 47:08


Dr. Roman Yampolskiy holds a PhD from the Department of Computer Science and Engineering at the University at Buffalo, where he was a recipient of a four-year National Science Foundation IGERT (Integrative Graduate Education and Research Traineeship) fellowship. His main areas of interest are behavioral biometrics, digital forensics, pattern recognition, genetic algorithms, neural networks, artificial intelligence, and games, and he is the author of over 100 publications, including multiple journal articles and books.

Session Summary
We discuss everything AI safety with Dr. Roman Yampolskiy. As AI technologies advance at a breakneck pace, the conversation highlights the pressing need to balance innovation with rigorous safety measures. Contrary to many other voices in the safety space, he argues for the necessity of keeping AI systems narrow and task-oriented: "I'm arguing that it's impossible to indefinitely control superintelligent systems." Nonetheless, Yampolskiy is optimistic about the future capabilities of narrow AI, from politics to longevity and health.

Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts

Existential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project. Hosted by Allison Duettmann and Beatrice Erkers. Follow Us: Twitter | Facebook | LinkedIn | Existential Hope Instagram. Explore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.

Mint Techcetra
India's digital shake-up and Apple's Innovations- What's happening??

Mint Techcetra

Play Episode Listen Later Jun 14, 2024 30:56


In this latest episode of Mint Techcetra, we explore India's tech policy shake-up under the new government, from digital protection legislation to ambitious missions in AI, semiconductors, and quantum computing. Dive into Apple's AI breakthroughs, where on-device models enhance privacy and user experience, and learn how Tim Cook is integrating AI seamlessly into the Apple ecosystem with a 3-billion-parameter small language (narrow AI) model running locally on every iPhone. Get the latest on India's digital transformation and Apple's innovations. Tune in now!

The Nonlinear Library
AF - Modern Transformers are AGI, and Human-Level by Abram Demski

The Nonlinear Library

Play Episode Listen Later Mar 26, 2024 5:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Modern Transformers are AGI, and Human-Level, published by Abram Demski on March 26, 2024 on The AI Alignment Forum. This is my personal opinion, and in particular, does not represent anything like a MIRI consensus; I've gotten push-back from almost everyone I've spoken with about this, although in most cases I believe I eventually convinced them of the narrow terminological point I'm making. In the AI x-risk community, I think there is a tendency to ask people to estimate "time to AGI" when what is meant is really something more like "time to doom" (or, better, point-of-no-return). For about a year, I've been answering this question "zero" when asked. This strikes some people as absurd or at best misleading. I disagree. The term "Artificial General Intelligence" (AGI) was coined in the early 00s, to contrast with the prevalent paradigm of Narrow AI. I was getting my undergraduate computer science education in the 00s; I experienced a deeply-held conviction in my professors that the correct response to any talk of "intelligence" was "intelligence for what task?" -- to pursue intelligence in any kind of generality was unscientific, whereas trying to play chess really well or automatically detect cancer in medical scans was OK. I think this was a reaction to the AI winter of the 1990s. The grand ambitions of the AI field, to create intelligent machines, had been discredited. Automating narrow tasks still seemed promising. "AGI" was a fringe movement. As such, I do not think it is legitimate for the AI risk community to use the term AGI to mean 'the scary thing' -- the term AGI belongs to the AGI community, who use it specifically to contrast with narrow AI. Modern Transformers[1] are definitely not narrow AI. It may have still been plausible in, say, 2019. 
You might then have argued: "Language models are only language models! They're OK at writing, but you can't use them for anything else." It had been argued for many years that language was an AI complete task; if you can solve natural-language processing (NLP) sufficiently well, you can solve anything. However, in 2019 it might still be possible to dismiss this. Basically any narrow-AI subfield had people who will argue that that specific subfield is the best route to AGI, or the best benchmark for AGI. The NLP people turned out to be correct. Modern NLP systems can do most things you would want an AI to do, at some basic level of competence. Critically, if you come up with a new task[2], one which the model has never been trained on, then odds are still good that it will display at least middling competence. What more could you reasonably ask for, to demonstrate 'general intelligence' rather than 'narrow'? Generative pre-training is AGI technology: it creates a model with mediocre competence at basically everything. Furthermore, when we measure that competence, it usually falls somewhere within the human range of performance. So, as a result, it seems sensible to call them human-level as well. It seems to me like people who protest this conclusion are engaging in goalpost-moving. More specifically, it seems to me like complaints that modern AI systems are "dumb as rocks" are comparing AI-generated responses to human experts. A quote from the dumb-as-rocks essay: GenAI also can't tell you how to make money. One man asked GPT-4 what to do with $100 to maximize his earnings in the shortest time possible. The program had him buy a domain name, build a niche affiliate website, feature some sustainable products, and optimize for social media and search engines. Two months later, our entrepreneur had a moribund website with one comment and no sales. So genAI is bad at business. 
That's a bit of a weak-man argument (I specifically searched for "generative ai is dumb as rocks what are we doing"). But it does demonstrate a pat...

The Nonlinear Library
LW - Modern Transformers are AGI, and Human-Level by abramdemski

The Nonlinear Library

Play Episode Listen Later Mar 26, 2024 5:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Modern Transformers are AGI, and Human-Level, published by abramdemski on March 26, 2024 on LessWrong. This is my personal opinion, and in particular, does not represent anything like a MIRI consensus; I've gotten push-back from almost everyone I've spoken with about this, although in most cases I believe I eventually convinced them of the narrow terminological point I'm making. In the AI x-risk community, I think there is a tendency to ask people to estimate "time to AGI" when what is meant is really something more like "time to doom" (or, better, point-of-no-return). For about a year, I've been answering this question "zero" when asked. This strikes some people as absurd or at best misleading. I disagree. The term "Artificial General Intelligence" (AGI) was coined in the early 00s, to contrast with the prevalent paradigm of Narrow AI. I was getting my undergraduate computer science education in the 00s; I experienced a deeply-held conviction in my professors that the correct response to any talk of "intelligence" was "intelligence for what task?" -- to pursue intelligence in any kind of generality was unscientific, whereas trying to play chess really well or automatically detect cancer in medical scans was OK. I think this was a reaction to the AI winter of the 1990s. The grand ambitions of the AI field, to create intelligent machines, had been discredited. Automating narrow tasks still seemed promising. "AGI" was a fringe movement. As such, I do not think it is legitimate for the AI risk community to use the term AGI to mean 'the scary thing' -- the term AGI belongs to the AGI community, who use it specifically to contrast with narrow AI. Modern Transformers[1] are definitely not narrow AI. It may have still been plausible in, say, 2019. 
You might then have argued: "Language models are only language models! They're OK at writing, but you can't use them for anything else." It had been argued for many years that language was an AI complete task; if you can solve natural-language processing (NLP) sufficiently well, you can solve anything. However, in 2019 it might still be possible to dismiss this. Basically any narrow-AI subfield had people who will argue that that specific subfield is the best route to AGI, or the best benchmark for AGI. The NLP people turned out to be correct. Modern NLP systems can do most things you would want an AI to do, at some basic level of competence. Critically, if you come up with a new task[2], one which the model has never been trained on, then odds are still good that it will display at least middling competence. What more could you reasonably ask for, to demonstrate 'general intelligence' rather than 'narrow'? Generative pre-training is AGI technology: it creates a model with mediocre competence at basically everything. Furthermore, when we measure that competence, it usually falls somewhere within the human range of performance. So, as a result, it seems sensible to call them human-level as well. It seems to me like people who protest this conclusion are engaging in goalpost-moving. More specifically, it seems to me like complaints that modern AI systems are "dumb as rocks" are comparing AI-generated responses to human experts. A quote from the dumb-as-rocks essay: GenAI also can't tell you how to make money. One man asked GPT-4 what to do with $100 to maximize his earnings in the shortest time possible. The program had him buy a domain name, build a niche affiliate website, feature some sustainable products, and optimize for social media and search engines. Two months later, our entrepreneur had a moribund website with one comment and no sales. So genAI is bad at business. 
That's a bit of a weak-man argument (I specifically searched for "generative ai is dumb as rocks what are we doing"). But it does demonstrate a pattern I've enco...
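The essay's core claim is that generative pre-training, training a model to predict the next token, yields broad competence. As a toy sketch of the next-token-prediction objective only (a bigram counting table with an invented corpus, nothing like a real Transformer), one might write:

```typescript
// Toy illustration of the next-token-prediction objective behind
// generative pre-training. A real Transformer learns contextual
// representations; this table merely counts adjacent word pairs.
function trainBigram(corpus: string): Map<string, Map<string, number>> {
  const counts = new Map<string, Map<string, number>>();
  const tokens = corpus.toLowerCase().split(/\s+/).filter(Boolean);
  for (let i = 0; i < tokens.length - 1; i++) {
    const next = counts.get(tokens[i]) ?? new Map<string, number>();
    next.set(tokens[i + 1], (next.get(tokens[i + 1]) ?? 0) + 1);
    counts.set(tokens[i], next);
  }
  return counts;
}

// Greedy "generation": always pick the most frequent successor.
function predictNext(
  model: Map<string, Map<string, number>>,
  word: string
): string | undefined {
  const successors = model.get(word.toLowerCase());
  if (!successors) return undefined;
  return Array.from(successors.entries()).sort((a, b) => b[1] - a[1])[0][0];
}

const model = trainBigram("the cat sat on the mat and the cat slept");
```

A real system replaces the counting table with a learned neural network over long contexts, but the training signal, predicting what comes next, is the same.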

Fluidity
Radical Progress Without Scary AI

Fluidity

Play Episode Listen Later Mar 10, 2024 14:51


Radical Progress Without Scary AI: Technological progress, in medicine for example, provides an altruistic motivation for developing more powerful AIs. I suggest that AI may be unnecessary, or even irrelevant, for that. We may be able to get the benefits without the risks. https://betterwithout.ai/radical-progress-without-AI What kind of AI might accelerate technological progress?: “Narrow” AI systems, specialized for particular technical tasks, are probably feasible, useful, and safe. Let's build those instead of Scary superintelligence. https://betterwithout.ai/what-AI-for-progress

1000 Hours Outsides podcast
1KHO 249: What's the Right Age to Get an AI Girlfriend? | Jeff Schacher, Rethinkifi

1000 Hours Outsides podcast

Play Episode Listen Later Feb 22, 2024 53:01


We're diving into conversational AI... which is already here

Blockchain DXB
Saudi Arabia: AI Frontier - Beyond Humans, this episode is entirely written by AI. The voice has been cloned using AI-enabled software

Blockchain DXB

Play Episode Listen Later Feb 17, 2024 23:37


Title: Saudi Arabia: AI Frontier
Description: Welcome to "Saudi Arabia: AI Frontier," your gateway to the cutting-edge developments in Artificial Intelligence within the Kingdom of Saudi Arabia. Join us on a journey through the exciting world of AI as we explore its transformative impact across various industries, its historical evolution, ethical considerations, and future trends. Hosted by George from Blockchain DXB, this podcast series delves deep into Saudi Arabia's AI initiatives, research, and innovations, providing insights from leading experts and policymakers.
Episode 1: Unveiling the AI Landscape
Introduction: In the inaugural episode of "Saudi Arabia: AI Frontier," host George from Blockchain DXB introduces listeners to the podcast's mission of exploring AI developments within Saudi Arabia. The episode sets the stage for the series by highlighting Saudi Arabia's ambitious Vision 2030 initiative and its focus on leveraging AI to drive innovation and economic growth.
Key Highlights: Overview of Blockchain DXB and its transition to exploring AI in Saudi Arabia. Introduction to AI and its significance in Saudi Arabia's Vision 2030 initiative. Discussion on the historical evolution of AI, from early pioneers to modern-day advancements. Exploration of different types of AI, including Narrow AI, General AI, and Superintelligent AI. Examination of AI technologies such as Machine Learning, Deep Learning, Neural Networks, Natural Language Processing, and Computer Vision.
Segment Highlights: Healthcare: AI's role in improving patient care and outcomes. Finance: AI's impact on algorithmic trading, fraud detection, and customer service. Automotive: AI's contribution to autonomous vehicles and smart transportation. Retail: AI-driven technologies enhancing the customer experience and operational efficiency. Education: AI's transformation of learning and teaching methods.
Ethical Considerations: Addressing bias in AI algorithms. Privacy concerns in AI applications. Job displacement due to automation. Ethical AI development and deployment. Regulatory challenges and legal frameworks.
Challenges and Limitations: Data quality and quantity. Interpretability and transparency of AI algorithms. Overcoming biases in AI systems. Regulatory and legal challenges. Ethical dilemmas in AI development and deployment.
Future Trends: Continued advancements in AI technologies. Integration of AI with other emerging technologies. Ethical and regulatory developments. Potential societal impacts and transformations of AI.
Conclusion: As the episode draws to a close, George reflects on the key takeaways and insights gained, emphasizing the importance of ethical AI practices and responsible innovation in Saudi Arabia's AI journey. He expresses gratitude to listeners, guests, and sponsors for their support and invites them to stay tuned for future episodes.
Closing Remarks: George signs off, leaving listeners with a message of optimism for Saudi Arabia's AI future and a reminder to subscribe to "Saudi Arabia: AI Frontier" for future episodes.
For the AI prompt, click here. For a podcast speech written by ChatGPT-4, click here. The AI voice-cloning software suggested by ChatGPT-4 and used in this episode is Descript.

The Tech Trek
Identifying Lasting Trends in AI and the Changing Enterprise Software Model

The Tech Trek

Play Episode Listen Later Feb 6, 2024 36:33


In this episode, Amir Bormand interviews Ross Fubini, the managing partner at XYZ Venture Capital, to discuss the growing AI opportunity and how to identify lasting trends in the field. They delve into the impact of AI on enterprise software models and its potential for cost savings. Ross shares insights on XYZ's investment focus on enterprise and fintech companies, including those in the public sector. With a portfolio of 107 businesses, XYZ aims to support and build these companies actively. Please tune in to gain valuable insights into the AI landscape and its potential for transformative growth. Highlights: [00:00:04] Growing AI opportunity. [00:02:43] AI and market shifts. [00:05:45] The Internet's transformative potential. [00:09:18] AI opportunity and emerging technologies. [00:10:20] AI-powered consulting and new enterprise systems. [00:15:38] Narrow AI application opportunities. [00:19:27] Pricing and market disruption. [00:23:12] AI growth in technology. [00:24:29] AI Opportunities and Adoption. [00:28:42] Leveraging AI for legacy brands. [00:30:04] Generative AI and business opportunities. [00:35:10] Looking back on emerging technologies. Guest: Ross Fubini is the founder and Managing Partner of XYZ Venture Capital. He's made a career as a successful early-stage technology investor and is passionate about connecting people to build insightful products. Ross is from Virginia and is now based in San Francisco, where he married a local with whom he has two children and goes over the top on holidays. He is an avid triathlete, marathon runner, and previous Ironman competitor. Ross likes to go faster in all things - from races to building companies. https://www.linkedin.com/in/fubini/ ---- Thank you so much for checking out this episode of The Tech Trek, and we would appreciate it if you would take a minute to rate and review us on your favorite podcast player. Want to learn more about us? 
Head over to https://www.elevano.com. Have questions or want to cover specific topics with our future guests? Please message me at https://www.linkedin.com/in/amirbormand (Amir Bormand).

Tech and Politics
Artificial intelligence and democracy (once more)

Tech and Politics

Play Episode Listen Later Jan 17, 2024 20:20


The success and widespread deployment of artificial intelligence (AI) have raised awareness of the technology's economic, social, and political consequences. The most recent step in AI development -- the application of large language models (LLMs) and other transformer models to the generation of text, image, video, or audio content -- has come to dominate the public imaginary of AI and has accelerated this discussion. But to assess AI's societal impact meaningfully, we need to look closely at the workings of the underlying technology and identify the areas of contact within democracy.

The Nonlinear Library
EA - Why can't we accept the human condition as it existed in 2010? by Hayven Frienby

The Nonlinear Library

Play Episode Listen Later Jan 10, 2024 3:15


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why can't we accept the human condition as it existed in 2010?, published by Hayven Frienby on January 10, 2024 on The Effective Altruism Forum. By this, I mean a world in which: Humans remain the dominant intelligent, technological species on Earth's landmasses for a long period of time (> ~10,000 years). AGI is never developed, or it gets banned / limited in the interests of human safety. AI never has much social or economic impact. Narrow AI never advances much beyond where it is today, or it becomes banned / limited in the interests of human safety. Mind uploading is impossible or never pursued. Life extension (beyond modest gains due to modern medicine) isn't possible, or is never pursued. Any form of transhumanist initiatives are impossible or never pursued. No contact is made with alien species or extraterrestrial AIs, no greater-than-human intelligences are discovered anywhere in the universe. Every human grows, peaks, ages, and passes away within ~100 years of their birth, and this continues for the remainder of the human species' lifetime. Most other EAs I've talked to have indicated that this sort of future is suboptimal, undesirable, or best avoided, and this seems to be a widespread position among AI researchers as well (1). Even MIRI founder Eliezer Yudkowsky, perhaps the most well-known AI abolitionist outside of EA circles, wouldn't go as far as to say that AGI should never be developed, and that transhumanist projects should never be pursued (2). And he isn't alone -- there are many, many researchers both within and outside of the EA community with similar views on P(extinction) and P(societal collapse), and they still wouldn't accept the idea that the human condition should never be altered via technological means. 
My question is why can't we just accept the human condition as it existed before smarter-than-human AI (and fundamental alterations to our nature) were considered to be more than pure fantasy? After all, the best way to stop a hostile, unaligned AI is to never invent it in the first place. The best way to avoid the destruction of future value by smarter-than-human artificial intelligence is to avoid obsession with present utility and convenience. So why aren't more EA-aligned organizations and initiatives (other than MIRI) presenting global, strictly enforced bans on advanced AI training as a solution to AI-generated x-risk? Why isn't there more discussion of acceptance (of the traditional human condition) as an antidote to the risks of AGI, rather than relying solely on alignment research and safety practices to provide a safe path forward for AI (I'm not convinced such a path exists)? Let's leave out the considerations of whether AI development can be practically stopped at this stage, and just focus more on the philosophical issues here.
References:
1. Katya_Grace (2024, January 5). Survey of 2,778 AI authors: six parts in pictures. EA Forum.
2. Yudkowsky, E. S. (2023, March 29). The only way to deal with the threat from AI? Shut it down. Time. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

M&A WAR STORIES - The Good, The Bad and The Ugly
TRYING TO DEFINE AI IN ALL ITS GUISES - THREAT OR BENEFIT - OPTIMISM OR WORRY

M&A WAR STORIES - The Good, The Bad and The Ugly

Play Episode Listen Later Nov 17, 2023 25:58


In the last episode, the dynamic duo (Rob and Toby) started to talk about the emergence of AI, and this week, we continued the conversation to try and put some definition to AI. And we also tried (but failed) to stay grounded and not get dragged into the stratosphere around AI. Anyway, this week's podcast was more serious than others as we sub-categorised AI into Machine Learning, Deep Learning, Narrow AI, General AI and Super AI; and that's probably just touching the surface. We also tried to understand our personal position (as intelligent laypersons) on whether we were optimistic about AI's potential or deeply concerned about some of the dangers that AI might pose. And that ranged from the benefit to humanity as a whole (imagine AI bringing education to everyone irrespective of race or social standing) right through to AI being harnessed for personal gain by the super-wealthy and politically powerful. And there's the easy leap from trying to define AI into a stratosphere discussion in the space of a few minutes. The good news is we recognized our distraction and solemnly promised that we would return to AI basics and discuss its use and potential use in M&A transactions. That's next week's topic, so tune in to find out if we managed to contain ourselves.

Earthlings Podcast
S3:E7 The Fast Future of AI with David Krueger, Cambridge University

Earthlings Podcast

Play Episode Listen Later Sep 26, 2023 36:50


Hello Earthlings! In today's episode, our host Lisa Ann Pinkerton (CEO of Technica Communications) and our guest David Krueger (Assistant Professor at Cambridge University) discuss the blisteringly fast future of AI! In his work at Cambridge, David helps with the computational lab and its machine learning departments. He focuses on deep learning algorithms, AI alignment and, in his words, “whatever we can do to prevent the chance that AI leads to human extinction.” David has a vast knowledge of the ways an AI system could potentially disrupt and in some cases even jeopardize human life as we know it. We explore the fundamental differences between Narrow AI and General AI, analyze potential risks, and ponder the implications of Superhuman AI. Join us on this journey through the evolving landscape of artificial intelligence!
Web Resources:
Our podcast website
To support the show, head over to our Patreon Page!
Thanks to Resource Labs for having us on the network!
Technica's website
Women in Cleantech & Sustainability website

Digital Conversations with Billy Bateman
Hippo Highway: Steering ChatGPT - A Tactical Guide to Writing Subject Lines and Winning Campaign Ideas

Digital Conversations with Billy Bateman

Play Episode Listen Later Sep 5, 2023 51:56


Steve Error, Sales Director at Signals, Jordan Crawford, Founder of Blueprint, and Dan Baird, Co-Founder & Product Lead at Wrench.ai, bring to light the different uses ChatGPT offers. These speakers from the AI Revenue Summit discuss winning campaign ideas and how AI can be used efficiently. To stay current on our latest events, follow us on LinkedIn.
Useful Timestamps:
3:53 - AI and Key Concepts
6:31 - GPTs explained
10:41 - Campaign ideas and custom instructions
12:50 - Who are the three musketeers that can help me?
18:42 - Spend time tuning GPT to automate exactly what you want
22:21 - Subject line ideas using GPT
26:32 - Introduce something that's surprising to make it memorable
32:01 - What data matters?
33:32 - Additional risks and pitfalls
35:18 - Prompt designs & engineering examples
37:49 - Tree of Thought example - ChatGPT Splitter
38:48 - Introduction of custom instructions
39:42 - Behavioral insights and traditional data
46:28 - Wrench.ai OFFER
47:12 - Blueprint OFFER
50:00 - Closing Remarks

Wealth Made Simple Podcast
AI: Different Levels

Wealth Made Simple Podcast

Play Episode Listen Later Aug 30, 2023 26:03


In this episode of Wealth Made Simple, Shaz and Kieron delve into the fascinating world of AI (Artificial Intelligence). They begin by dispelling common misconceptions about AI, such as the fear of robots taking over, and provide an overview of the different levels of AI, from narrow AI to artificial superintelligence.
KEY TAKEAWAYS
- AI encompasses various levels of intelligence, ranging from narrow AI to artificial superintelligence. Narrow AI refers to AI systems designed for specific tasks, while artificial superintelligence represents AI systems that surpass human cognitive abilities.
- The potential benefits of AI are immense, including solving complex problems, advancing scientific research, and revolutionizing society. However, there are also ethical concerns and risks associated with the development of superintelligence.
- Narrow AI is the current level of AI that exists, and it is highly specialized and limited to specific tasks. Examples include voice assistants like Siri and Alexa, as well as chatbots and spam filters.
- AI can be a valuable tool for reducing research time and improving productivity and efficiency. However, it is crucial to sense-check and verify the information provided by AI, as it can generate false or inaccurate results if not properly trained or guided.
BEST MOMENTS
"Superintelligence will be able to outsmart humans. It will be able to outmaneuver us."
"It can solve world problems the way that we solve a child's problems."
"It will just increase overall in every industry the efficiencies within our workplaces and jobs."
"The future holds a complete idea of growth around all of this and it will just continue to grow."
VALUABLE RESOURCES
shaz@aaa-accountants.co.uk
https://www.facebook.com/shaznawaz11
https://www.linkedin.com/in/shaznawaz/
https://www.instagram.com/parkwardlabour/
https://www.youtube.com/channel/UCJCDESxIqYjhiIBxJ90nKwQ?disable_polymer=true
Shaz Nawaz is a serial entrepreneur; he owns five thriving businesses in diverse sectors. Shaz is committed to helping business owners build successful businesses. Having conducted over 3,000 business growth consultations he has helped his clients generate millions in additional profits. His purpose is to inspire business owners to build businesses that are hugely profitable and sustainable. He is a huge advocate of having multiple streams of income. He has written a number of business books and regularly contributes articles to mainstream media outlets. This show was brought to you by Progressive Media
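Spam filters, one of the episode's examples of narrow AI, illustrate the point well: a system can be useful at exactly one task and nothing else. A minimal keyword-based sketch (the keyword list and threshold are invented for illustration, not any real filter):

```typescript
// A deliberately narrow "AI": it scores exactly one task (spam
// detection) against a fixed keyword list, and is useless for
// anything outside that task. Keywords are illustrative only.
const SPAM_KEYWORDS = ["free", "winner", "prize", "urgent", "claim"];

// Count how many spam keywords appear in the message.
function spamScore(message: string): number {
  const words = message.toLowerCase().split(/\W+/);
  return SPAM_KEYWORDS.filter((k) => words.includes(k)).length;
}

// Flag the message once enough keywords are present.
function isSpam(message: string, threshold = 2): boolean {
  return spamScore(message) >= threshold;
}
```

Modern filters learn their features statistically rather than from a hand-written list, but they remain just as narrow: this code has no competence outside classifying messages.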

The Actionable Futurist® Podcast
S5 Episode 22: Navigating the World of General Artificial Intelligence with Peter Voss from Aigo.ai

The Actionable Futurist® Podcast

Play Episode Listen Later Aug 26, 2023 19:03 Transcription Available


Can machines really think like humans? What is the future of General Artificial Intelligence (GAI) when machines resemble human behaviour more closely than ever before? In this episode, Peter unveils his journey into the realm of General Artificial Intelligence (GAI) and his vision of machines possessing the ability to think, learn, and reason like humans. We look at the intricacies of General AI and how it sets itself apart from narrow AI. The episode also looks at how companies are tackling the immense challenges associated with crafting machines with general intelligence, from understanding the significance of concept formation in artificial general intelligence to discussing the role of quantum computing and resources in AI development. Peter also provides his views on the potential of machines developing empathy and the role of AI in ethical and moral debates, and answers the questions I've always wanted to ask: can AI feel empathy and love? Finally, we take a peek into the future as Peter shares how Aigo.ai is harnessing the power of conversational AI to revolutionize personalised experiences.
More on Peter
Peter on LinkedIn
Peter on Medium
Aigo website
Peter Voss is the world's foremost authority in Artificial General Intelligence. He coined the term 'AGI' in 2001 and published a book on Artificial General Intelligence in 2002. He is a serial AI entrepreneur and technology innovator who has for the past 20 years been dedicated to advancing Artificial General Intelligence. Peter Voss' careers include being an entrepreneur, engineer and scientist. His experience includes founding and growing a technology company from zero to a 400-person IPO. For the past 20 years, his focus has been on developing AGI (artificial general intelligence). In 2008 Peter founded Smart Action Company, which offers the only call automation solution powered by an AGI engine. He is now CEO & Chief Scientist at Aigo.ai Inc., which is developing and selling increasingly advanced AGI systems for large enterprise customers. Peter also has a keen interest in the inter-relationship between philosophy, psychology, ethics, futurism, and computer science.
Your Host: Actionable Futurist® & Chief Futurist Andrew Grill
For more on Andrew, what he speaks about and recent talks, please visit ActionableFuturist.com
Andrew's Social Channels
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Andrew's upcoming book

A Beginner's Guide to AI
Types of AI: Narrow AI vs. General AI vs. Superintelligent AI

A Beginner's Guide to AI

Play Episode Listen Later Jul 22, 2023 13:12


In this engaging episode of "A Beginner's Guide to AI," we delve into the diverse types of artificial intelligence: Narrow AI, General AI, and Superintelligent AI. From discussing the practical influence of Narrow AI to pondering the theoretical boundaries of General and Superintelligent AI, the episode offers an immersive dive into the depth of AI categories. Real-world examples and intriguing case studies provide tangible insight, making the intricate world of AI accessible and captivating. Plus, we engage listeners with thought-provoking questions and invite hands-on exploration of AI tools. Businesses interested in AI projects will find our professional guidance invaluable. Are you interested in learning to use AI? Take a look at my free class on Udemy: The Essential Guide to Claude 2! This podcast was generated with the help of artificial intelligence, offering a remarkable testament to the AI technologies we discuss. Tune in and join us on this enlightening journey through the AI landscape. Music credit: "Modern Situations by Unicorn Heads"

Data Gurus
Leveraging AI for Insights Generation with Michalis Michael at DMR Group

Data Gurus

Play Episode Listen Later Jul 11, 2023 23:20


In this episode, host Sima Vasa is pleased to welcome Michalis Michael, CEO at DMR Group, who shares the work his organization is doing with AI to be more efficient in both gathering and analyzing market data. Michalis shared some important insights on the evolution of AI:
- AI can be categorized into different types: Narrow AI and Strong AI.
- Narrow AI includes machine learning models and is a useful data analytics tool.
- Machine learning models are most effective when they're trained from human inputs, which still leaves some room for subjectivity in the final results.
- AI is not a silver bullet but a time-saving tool that more efficiently aggregates and categorizes massive amounts of data.
As Michalis tells it, the best AI tools still require a human touch to implement them in a meaningful use case.
- DMR
- Sima Vasa
- Infinity Squared
- Infinity Squared - LinkedIn
Thanks for listening to the Data Gurus podcast, brought to you by Infinity Squared. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show, and be sure to subscribe so you never miss another insightful conversation. #Analytics #M&A #Data #Strategy #Innovation
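The episode's point that machine-learning models inherit subjectivity from their human-labeled inputs shows up even in a toy text categorizer: the model is only a reflection of whatever labels the trainers chose. A sketch with invented labels and example data:

```typescript
// Tiny bag-of-words text categorizer trained from human-labeled
// examples. The labels below are one labeler's judgment calls;
// a different labeler would produce a different model.
type Example = { text: string; label: string };

// Build a set of known words per label from the labeled examples.
function train(examples: Example[]): Map<string, Set<string>> {
  const model = new Map<string, Set<string>>();
  for (const { text, label } of examples) {
    const bag = model.get(label) ?? new Set<string>();
    for (const w of text.toLowerCase().split(/\W+/)) if (w) bag.add(w);
    model.set(label, bag);
  }
  return model;
}

// Assign the label whose word bag overlaps the input the most.
function categorize(model: Map<string, Set<string>>, text: string): string {
  const words = text.toLowerCase().split(/\W+/).filter(Boolean);
  let best = "unknown";
  let bestHits = 0;
  model.forEach((bag, label) => {
    const hits = words.filter((w) => bag.has(w)).length;
    if (hits > bestHits) {
      best = label;
      bestHits = hits;
    }
  });
  return best;
}

const model = train([
  { text: "great battery life and fast shipping", label: "positive" },
  { text: "broken on arrival, refund requested", label: "negative" },
]);
```

Relabel the training examples and `categorize` returns different answers for the same text, which is exactly the subjectivity the episode describes.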

hy Podcast
Episode 242 with Jan Hasse: How Companies Can Put Artificial Intelligence to Practical Use Now

hy Podcast

Play Episode Listen Later Jul 11, 2023 30:48


Invented decades ago and only recently technically mature, artificial intelligence is revolutionizing everyday business. But what does that mean in practice? How do you deploy ChatGPT, Midjourney, Stable Diffusion & Co.? What efficiency and quality gains do they deliver, and at what cost? What does it take to combine proven narrow AI with novel generative AI? Where in your own company do you start, how do you set up working groups, and how does implementation proceed? Many companies know they urgently need to act but are searching for the right approach. Jan Hasse is Managing Director of hy Technologies, a wholly owned subsidiary of Axel Springer hy GmbH. The company focuses on artificial intelligence: it advises clients on projects and also builds its own software solutions. hy Technologies is, in a sense, the AI arm of hy. Managing Director Jan Hasse has more than a decade of experience in this field. Before joining us, he was Director AI Innovations & Ecosystems at Deloitte and then founded Cloe.ai to build a self-service marketplace for AI solutions. He talks with us about the fundamentals, concepts, pricing, and opportunities of artificial intelligence, reporting precisely and practically from the implementation workshop. What works and what doesn't? An episode for anyone who wants to understand AI better and make rapid progress with it in their own company. Did you enjoy this episode? Do you have feedback or suggestions for improvement? Then write to us at podcast@hy.co. We look forward to hearing from you.

Machine Learning Podcast - Jay Shah
Promises and Lies of ChatGPT - understanding how it works | Subbarao Kambhampati

Machine Learning Podcast - Jay Shah

Play Episode Listen Later Jun 7, 2023 166:43


Dr. Subbarao Kambhampati is a Professor of Computer Science at Arizona State University and the director of the Yochan lab, where his research focuses on decision-making and planning, specifically in the context of human-aware AI systems. He has been named a fellow of AAAI, AAAS, and ACM in recognition of his research contributions and also received a distinguished alumnus award from the University of Maryland and IIT Madras.
Check out Rora: https://teamrora.com/jayshah
Guide to STEM Ph.D. AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023
Rora's negotiation philosophy:
https://www.teamrora.com/post/the-biggest-misconception-about-negotiating-salary
https://www.teamrora.com/post/job-offer-negotiation-lies
00:00:00 Highlights and Intro
00:02:16 What is ChatGPT doing?
00:10:27 Does it really learn anything?
00:17:28 ChatGPT hallucinations & getting facts wrong
00:23:29 Generative vs Predictive Modeling in AI
00:41:51 Learning common patterns from language
00:57:00 Implications in society
01:03:28 Can we fix ChatGPT hallucinations?
01:26:24 RLHF is not enough
01:32:47 Existential risk of AI (or ChatGPT)
01:49:04 Open sourcing in AI
02:04:32 OpenAI is not "open" anymore
02:08:51 Can AI program itself in the future?
02:25:08 Deep & Narrow AI to Broad & Shallow AI
02:30:03 AI as assistive technology - understanding its strengths & limitations
02:44:14 Summary
Articles referred to in the conversation: https://thehill.com/opinion/technology/3861182-beauty-lies-chatgpt-welcome-to-the-post-truth-world/
More about Prof. Rao
Homepage: https://rakaposhi.eas.asu.edu/
Twitter: https://twitter.com/rao2z
Also check out these talks on all available podcast platforms: https://jayshah.buzzsprout.com
About the Host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ for any queries.
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.***
Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/

Unf*ck Your Data
Mythos AI einfach erklärt generative AI und die Daten dahinter | Alexander Loth

Unf*ck Your Data

Play Episode Listen Later May 24, 2023 40:25


Since the phenomenal breakthrough of ChatGPT, the topic of AI has become practically omnipresent in society. But what exactly are these generative AIs, and what data do these tools use and need? Christian Krug, host of the podcast Unf*ck Your Data, discusses this with Alexander Loth of Microsoft. Whether it is summarizing long texts, painting an appealing picture, or translating code into another programming language, little seems impossible right now for the AI tools springing up everywhere. But for these apparent marvels to be possible at all, the so-called models behind the AI must be trained. In doing so they "learn", and learning takes large amounts of data. This means that with AI, too, the quality of the result depends substantially on the quality and quantity of the data used. At present we are in the realm of narrow AI: systems that can perform individual tasks, such as painting a picture, as well as or better than a human. So for the near future, AI tools are exactly that: tools! The decisive skill will be how to operate these new tools in order to work better and faster. Knowing the tools and choosing the right one for a given task are likewise essential for success.
People who master this will be far ahead. The human communication skills of expressing yourself precisely and turning requirements into good prompts are the key to success. Alex explains clearly and simply how these AIs roughly work, and why some really good tools also carry the label "Made in Germany".
▬▬▬▬▬▬ Profiles: ▬▬▬▬
Alex's LinkedIn profile: https://www.linkedin.com/in/aloth/
Christian's LinkedIn profile: https://www.linkedin.com/in/christian-krug/
Unf*ck Your Data on LinkedIn: https://www.linkedin.com/company/unfck-your-data
▬▬▬▬▬▬ Book recommendation: ▬▬▬▬
Alexander's book recommendation: Der Anschlag - Stephen King
▬▬▬▬▬▬ Where to find Unf*ck Your Data: ▬▬▬▬
The podcast on Spotify: https://open.spotify.com/show/6Ow7ySMbgnir27etMYkpxT?si=dc0fd2b3c6454bfa
The podcast on iTunes: https://podcasts.apple.com/de/podcast/unf-ck-your-data/id1673832019
The podcast on Google: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5jYXB0aXZhdGUuZm0vdW5mY2steW91ci1kYXRhLw?ep=14
The podcast on Deezer: https://deezer.page.link/FnT5kRSjf2k54iib6
▬▬▬▬▬▬ Contact: ▬▬▬▬
E-Mail: christian@uyd-podcast.com
▬▬▬▬▬▬ Timestamps: ▬▬▬▬▬▬▬▬▬▬▬▬▬
00:00 Intro
01:39 Alexander introduces himself
03:07 Generative AI as a subfield of AI
04:13 Narrow, General, or Super AI
04:53 The AI breakthrough was hard to foresee
06:10 The evolution of GPT
07:53 Without good test data there is no good AI
12:12 A top-tier AI from Germany
15:34 Generative AI currently creates no genuinely new content
19:03 Generative AI as a powerful tool, but still a narrow AI
20:41 From narrow AIs toward general AI
25:39 Some AIs solve problems faster than many humans
28:35 Prompt engineering and AI literacy are important skills
33:02 AIs can hallucinate too
35:31 Two questions for Alexander

Arek Skuza
Podcast #90 - Mornings with Data and AI -- Why narrow AI is better for monetization

Arek Skuza

Play Episode Listen Later May 8, 2023 4:01


Let's dive into more details about ChatGPT Plugins - the big revolution in GPT engines.

Generations Radio
What is Artificial Intelligence? - A Biblical Worldview on AI

Generations Radio

Play Episode Listen Later Apr 25, 2023 27:00


For those asking the question "What is AI, and should I be concerned?", here is a simple introduction to General AI and Narrow AI. For those who have the right worldview (a biblical worldview), here is the best interpretation of this man-invented system promising the world another Tower of Babel. How does the Christian interact with it? Does the Christian family unplug from the matrix?

This program includes:
1. The World View in 5 Minutes with Adam McManus (Fox News abruptly fired Tucker Carlson, FBI yet to release transgender's manifesto in Christian school rampage, Donald Trump honored Pastor Charles Stanley)
2. Generations with Kevin Swanson

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
AI Today Podcast: AI Glossary Series – Loss Function, Cost Function and Gradient Descent

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

Play Episode Listen Later Apr 21, 2023 12:48


In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer define the terms Loss Function, Cost Function, and Gradient Descent, explain how these terms relate to AI, and discuss why it's important to know about them.

Show Notes:
FREE Intro to CPMAI mini course
CPMAI Training and Certification
AI Glossary
Glossary Series: Artificial Intelligence
Glossary Series: Artificial General Intelligence (AGI), Strong AI, Weak AI, Narrow AI
Glossary Series: Heuristic & Brute-force Search
AI Glossary Series – Machine Learning, Algorithm, Model
Glossary Series: (Artificial) Neural Networks, Node (Neuron), Layer
Glossary Series: Bias, Weight, Activation Function, Convergence, ReLU
Glossary Series: Perceptron
Glossary Series: Hidden Layer, Deep Learning

Continue reading AI Today Podcast: AI Glossary Series – Loss Function, Cost Function and Gradient Descent at AI & Data Today.
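For listeners who want a concrete picture of the three glossary terms before pressing play, here is a minimal, self-contained sketch (our own illustration, not from the episode): a mean-squared-error loss (cost) function and plain gradient descent fitting a single parameter.

```python
# Illustration only: fit one parameter w in the model y = w * x
# by minimizing a mean-squared-error loss with gradient descent.

def loss(w, xs, ys):
    """Loss (cost) function: mean squared error of y = w * x."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def gradient(w, xs, ys):
    """Derivative of the loss with respect to w."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def gradient_descent(xs, ys, w=0.0, lr=0.1, steps=100):
    """Repeatedly step w against the gradient to shrink the loss."""
    for _ in range(steps):
        w -= lr * gradient(w, xs, ys)
    return w

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # generated by y = 2x, so w should approach 2
w = gradient_descent(xs, ys)
print(round(w, 3))  # approaches 2.0
```

The learning rate `lr` and step count are arbitrary choices for this toy problem; real training loops tune both.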

Investor Connect Podcast
Startup Funding Espresso -- Types of AI

Investor Connect Podcast

Play Episode Listen Later Apr 6, 2023 2:05


Types of AI

Hello, this is Hall T. Martin with the Startup Funding Espresso -- your daily shot of startup funding and investing. There are many types of Artificial Intelligence. Here's a list of the types:

Narrow AI -- focuses on a single task or narrow range of functions. Applications include image recognition, voice recognition, and translation applications.
Generative AI -- focuses on a wide range of topics and seeks to generate answers to prompts. Applications include marketing content generation, sales administration, and general education queries.
Reactive AI -- responds to the current state of a situation, such as a game, and responds with the next potential move. IBM's Deep Blue, which plays chess games, is an example.
Limited Memory AI -- uses past data to understand the situation and react to it. Self-driving automobiles are an example.
Theory of Mind AI -- uses physical cues to determine the emotional or psychological state of someone. Robotic applications that can sense human emotion through visual signals are a good example.
Self-Awareness AI -- the ability to understand human conditions and detect human emotion. This is the type of AI that is shown in films for what the future may hold.

Consider these types of AI and how they may apply to your application.

Thank you for joining us for the Startup Funding Espresso where we help startups and investors connect for funding. Let's go startup something today.
_______________________________________________________
For more episodes from Investor Connect, please visit the site at:
Check out our other podcasts here:
For Investors check out:
For Startups check out:
For eGuides check out:
For upcoming Events, check out
For Feedback please contact info@tencapital.group
Please , share, and leave a review. Music courtesy of .

The Emotional Intelli-Gents Podcast: Navigating Leadership with Emotional intelligence
Ep 06: Embracing the Future: Artificial Intelligence and Emotional Intelligence in the Workplace

The Emotional Intelli-Gents Podcast: Navigating Leadership with Emotional intelligence

Play Episode Listen Later Apr 5, 2023 39:20


In this episode of the Emotional Intelli-Gents Podcast we focus on a topic that has been dominating the international tech conversation recently by changing the way people live and work.  What we are referring to is artificial intelligence technology platforms like ChatGPT and Midjourney. Platforms like these, and others, have undoubtedly taken the world by storm. Without question, there is major utility in products and platforms such as ChatGPT.  We also know the forthcoming generation of AI advancements will eclipse what we have available today.  Specifically, the episode will focus on how AI technology, once fully developed and relied on, could bolster a leader's ability to lead in an emotionally intelligent way.  We will also explore the alternative, of how AI technology may hinder a leader's ability to lead with EQ.The episode will begin by sharing a very basic explanation of the current state of AI.  We define the differences between narrow AI and general AI.  Narrow AI is what we see available today in the marketplace with programs like ChatGPT.  Narrow AI is designed to analyze a large mass of data available to it and complete tasks the tool is directed to complete.  General AI, on the other hand, is still a work in progress and is being developed as we speak.  General AI is distinct in that it has the ability to apply reasoning and will be able to replicate human intelligence.  Narrow AI tools do not have the capacity to replace humans in the workplace at scale, but will act more as a complement to the current workforce, allowing us all to leave the menial tasks to AI and focus on higher value work.  One cross-section of AI and EQ we explore in great detail in the episode, are tools, currently in development, that will be able to provide data on the emotions of specific individuals employed by an organization.  
AI tools will have the ability to use natural language processing (NLP), facial and body language recognition, and tone analysis to complete sentiment analyses of individuals.  The sentiment analysis will then draw conclusions about an individual's level of engagement or general mood towards work and provide feedback to leaders.  In this episode, we discuss the pros and cons of such technology eventually taking hold.  There are arguments to be made on both sides.  For starters, this type of AI technology can create ethical or privacy concerns for individuals.  It can also cause leaders to rely too heavily on the tool's assessment of an individual's emotions, rather than relying on critical human leadership skills like intuition and empathy to draw similar conclusions. On the flip side, there is an argument that more information, whether derived through human to human interaction, or AI tools, is a good thing.  For companies, looking to implement more policies to better emotionally support their workforce, this type of data can be invaluable.  Sentiment data, if collected responsibly can have a real impact on bridging the gap between how employees are feeling at work, and how they want to feel at work. EP06, in its entirety, tackles a very relevant topic of AI and its inevitable downstream implications for all of us.  Please join us by listening in and giving us your feedback.  Do you see value in AI tools giving businesses insights on their team's emotional state? How will this impact leader's ability to lead with emotional intelligence? Feel free to send us an email at info@emotionalintelligents.com and share your thoughts or visit us at https://linktr.ee/emotionalintelligents

Creedal Catholic
E137 The Emergence of AI w/Alex Fogassy

Creedal Catholic

Play Episode Listen Later Mar 20, 2023 42:05


Today on the show, I welcome my friend Alex Fogassy, Air Force officer and doctoral candidate at the University of Oxford. We talk about: "What is Artificial Intelligence?" The difference between Narrow AI and AGI What the limitations and threats are of Narrow AI What developments in these domains (or lack thereof) portend for our future The parallels between transhumanism and Christianity ...and much more. Feedback? Email me: zac@creedalpodcast.com

The Catchup
A Conversation with Bing: Our First-Hand Review of the AI Chatbot

The Catchup

Play Episode Listen Later Mar 13, 2023 109:57


In this episode of The Catchup, we're excited to give our own first-hand review of the Bing AI chatbot, which is based on ChatGPT. A few weeks ago, we discussed other people's experiences with the program, but now we're ready to share our own thoughts and insights.

We'll start by giving a brief overview of the Bing AI chatbot and how it works. Then, we'll dive into our personal experiences using the chatbot and share some of our favorite conversations. We'll talk about the strengths and weaknesses of the chatbot, and discuss whether we think it's a valuable tool for you.

Finally, we'll wrap up the episode by discussing the future of the Bing AI chatbot, and what it means for the future of the Artificial Intelligence-powered search engine industry.

Support the show

Let's get into it!
Follow us:
Facebook
Instagram
YouTube
Shop
Email us: TheCatchupCast@Gmail.com

Health Pilots
AI for Self-Service: Learning to Structure Adaptive Digital Assistance

Health Pilots

Play Episode Listen Later Mar 9, 2023 31:59


Artificial intelligence (AI) has been an emerging hot topic over the last several months with the rise of OpenAI's ChatGPT, Microsoft's integration of ChatGPT technology into its Bing search engine, and Google's announcement of its own chatbot, known as Bard. And while there are concerns about the more "general AI" technologies built to improve neural network capabilities so they are comparable to those of humans, health care systems are able to expand their services by leveraging the more familiar "narrow" or single-task AI tools, such as virtual chat assistance. Deploying this kind of AI technology can lead to an enhanced self-service experience for patients. We welcome Matt White, Director of Innovation at Contra Costa Health Services (CCHS), who shares how they've begun to thoughtfully integrate AI technology in order to better understand their patient engagement, with the ultimate aim to provide a consistent experience across all digital channels.

Learn more about the people, places, and ideas in this episode:
Matt White, Director of Innovation at Contra Costa County Health Services
Hyro.ai
Technology Hub, a CCI program that helps organizations vet, pilot, evaluate, and spread innovative digital health solutions targeting Medicaid markets and historically underinvested communities

The Nonlinear Library
AF - Squeezing foundations research assistance out of formal logic narrow AI. by Donald Hobson

The Nonlinear Library

Play Episode Listen Later Mar 8, 2023 3:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Squeezing foundations research assistance out of formal logic narrow AI., published by Donald Hobson on March 8, 2023 on The AI Alignment Forum. Suppose you have a ML model trained to output formal proofs. Maybe you start with ZFC and then add extra tokens for a range of common concepts. (along with definitions. ). So a human mathematician needs to type in the definition of a gradient in terms of limits, and the definition of limits in terms of epsilon and delta, and the definition of the real numbers in terms of dedekind cuts. All the way back to ZFC. The human needn't type any proofs, just the definitions. The model could be trained by generating random syntactically correct strings of tokens, and trying to prove or disprove them. (Remember, we have added the notion of a gradient to the token pool, plenty of the random questions will involve gradients) Hopefully it forms intermediate theorems and heuristics useful towards proving a wide class of theorems. Computer programs can be described as mathematical objects. So the human adds some tokens for lisp programs, and a few definitions about how they behave to the token pool. "Will program X do Y?" is now a perfectly reasonable question to ask this model. This is where the magic happens. You give your system a simple toy problem, and ask for short programs that solve the toy problem, and about which many short theorems can be proved. Maybe you do gradient descent on some abstract latent space of mathematical objects. Maybe an inefficient evolutionary algorithm selecting both over the space of programs and the theorems about them. Maybe "replace the last few layers, and fine tune the model to do a new task", like RLHF in ChatGPT. Now I don't expect this to just work first time. 
You will want to add conditions like "ignore theorems that are true of trivial programs (eg the identity program)" and perhaps "ignore theorems that only take a few lines to prove" or "ignore theorems so obvious that a copy of you with only 10% the parameters can prove it". For the last one, I am thinking of the programmers actually training a mini version with 10% the parameters, and running some gradients through it. I am not thinking of the AI reasoning about code that is a copy of itself. The AI model should have a latent space. This can let the programmers say "select programs that are similar to this one" or "choose a program about which theorems close to this theorem in latent space can be proved". The idea of this is that Asking questions should be safe. There are a bunch of different things we can optimize, and it should be safe to adjust parameters until it is proving useful results not trivialities. The AI doesn't have much information about human psychology, or about quantum physics or the architecture of the processor it's running on. Gradient descent has been pushing it to be good at answering certain sorts of question. There is little to no advantage to being good at predicting the questions or figuring out what they imply about the people asking them. With a bit of fiddling, such a design can spit out interesting designs of AI, and theorems about the designs. This isn't a foolproof solution to alignment, but hopefully such help makes the problem a lot easier. It is ABSOLUTELY NOT SAFE to throw large amounts of compute at the programs that result. Don't have anything capable of running them installed. The programs and the theorems should be read by humans, in the hope that they are genius insights into the nature of AI. The textbook from the future. Humans can then use the insights to do... something. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

White Noise - Sleep - Study - Meditation - Sounds
What is Artificial Intelligence?

White Noise - Sleep - Study - Meditation - Sounds

Play Episode Listen Later Mar 8, 2023 3:05


Artificial intelligence (AI) is a term that refers to the ability of machines to perform tasks that normally require human intelligence, such as perceiving, reasoning, learning, problem-solving and decision-making. AI can be applied to various domains and industries, such as healthcare, education, entertainment, finance and more. AI can be classified into two main types: narrow AI and general AI. Narrow AI is designed to perform specific tasks within a limited domain, such as face recognition, speech recognition or chess playing. General AI is the hypothetical ability of machines to exhibit intelligence across any domain and task, similar to humans. While narrow AI has achieved remarkable results in recent years, general AI remains a distant goal for researchers. AI can also be categorized based on its methods and techniques. Some of the common methods include machine learning (ML), deep learning (DL), natural language processing (NLP), computer vision (CV) and robotics. Machine learning is the process of enabling machines to learn from data and experience without explicit programming. Deep learning is a subset of machine learning that uses artificial neural networks to model complex patterns and relationships in data. Natural language processing is the field of AI that deals with understanding and generating natural languages, such as English or Chinese. Computer vision is the field of AI that deals with analyzing and interpreting visual information, such as images or videos. Robotics is the field of AI that deals with creating machines that can interact with the physical world through sensors and actuators. AI has many benefits and challenges for society. Some of the benefits include improving productivity, efficiency, accuracy and innovation in various sectors; enhancing human capabilities and well-being; creating new opportunities for education, entertainment and communication; solving complex problems and advancing scientific discovery. 
Some of the challenges include ensuring ethical, legal and social implications; avoiding bias, discrimination and unfairness; ensuring safety, security and privacy; maintaining human dignity, autonomy and values; fostering trustworthiness, transparency and accountability. AI is an exciting field that has transformed many aspects of our lives and will continue to do so in the future. However, it also poses significant questions about its impact on humanity and our role in shaping its development. --- Send in a voice message: https://podcasters.spotify.com/pod/show/dwight-allen/message
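To make the narrow-vs-general distinction above concrete, here is a minimal sketch (our own illustration, not from the episode) of "narrow AI" in the machine-learning sense: a 1-nearest-neighbour classifier that learns one specific task from labelled examples and can do nothing else.

```python
# Illustration only: a single-task learner. It learns one mapping
# from labelled examples; that specialization is what makes it "narrow".
import math

def nearest_neighbour(train, point):
    """Return the label of the training example closest to `point`."""
    best_label, best_dist = None, math.inf
    for features, label in train:
        dist = math.dist(features, point)  # Euclidean distance (Python 3.8+)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy task: classify 2-D points as "left" or "right" of x = 0.
train = [((-2.0, 1.0), "left"), ((-1.0, -1.0), "left"),
         ((2.0, 0.5), "right"), ((1.5, -0.5), "right")]
print(nearest_neighbour(train, (-1.5, 0.0)))  # left
print(nearest_neighbour(train, (1.0, 0.0)))   # right
```

A general AI, by contrast, would transfer what it knows to tasks it was never trained on; nothing in this sketch can do that.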

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
AI Today Podcast: AI Glossary Series: Artificial General Intelligence (AGI), Strong AI, Weak AI, Narrow AI

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

Play Episode Listen Later Nov 11, 2022 9:59


In order to understand the goal of Artificial General Intelligence (AGI), which is also sometimes referred to as Strong AI, it's important to understand the difference between strong AI and narrow or weak AI. In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer share the definitions for AGI/Strong AI and Narrow/Weak AI and explain why the terms Strong AI and Weak AI have fallen out of favor in recent years. Continue reading AI Today Podcast: AI Glossary Series: Artificial General Intelligence (AGI), Strong AI, Weak AI, Narrow AI at Cognilytica.

Trans Resister Radio
Arrival of AGI or Narrow AI Has Become Too Convincing, Aot#355

Trans Resister Radio

Play Episode Listen Later Jun 22, 2022 55:31


A Google employee has claimed that a chatbot he was working on has attained sentience. This is an important story no matter which way you slice it.  Topics include: Blake Lemoine, LaMDA chatbot, sentient AI, AGI, Google, PR, machine learning, truth or fiction, power of narrow AI systems, consciousness, Medium blog, meditation, automating society, Dalle Mini, AI text prompt image generators, influence of technology, Jacques Ellul, Sam Aydlette, simulacra, AGI not part of business model, mind space, Simulation Hypothesis, Dancing With My Fate, Trickster God, white rabbit, surveillance, intuition, virtual worlds, emulating reality, empirical truth

Tech and Politics
Artificial intelligence and democracy

Tech and Politics

Play Episode Listen Later May 10, 2022 25:32


We encounter AIs daily, be it in the voice assistants in our homes and phones, through automation in our workplace, or as the drivers of policing or credit decisions. In this and the following episodes, we focus on the impact of AI on democracy.

Got Velocity?
Will you even have a job in 10 years with how fast technology is changing?

Got Velocity?

Play Episode Listen Later Feb 17, 2022 13:40


2:00 - Lucas: At our core, we are some version of communist
4:25 - Craig - Q) What if you don't have the ability to produce in the future society?
6:10 - Lucas - The question we have to ask is, what is the value of a human?
6:29 - Lucas - The Ideological Debate - You eat what you kill... and if you can't kill, you can't eat
8:42 - Lucas - We don't know if we are missing something in the future... What is the opportunity cost if we allow a person to die because they can't kill today
10:30 - Is AI our biggest threat...

Mentions and Links
https://lucasroot.com/
Lucas LinkedIn - https://www.linkedin.com/in/lucroot/
IG: @lucroot
Get Access To Lucas's - WORKING FROM HOME MASTERY COURSE HERE
Interested in working with Craig 1 on 1... go to getmybattleplan.com

Voices in Bioethics
Jay Shaw Discusses the Ethics of Narrow AI

Voices in Bioethics

Play Episode Listen Later Feb 15, 2022 24:14


Camille Castelyn interviews Professor Jay Shaw about which ethics and values underlie the current design of narrow AI. The discussion focuses on his team’s research on design ethics and a community-integrated approach to healthcare and digital systems in light of an often-dominant capitalist design approach.  Jay Shaw is Assistant Professor in the Department of Physical Therapy…

AI.cooking
AI episode 21: Picking Quarrels

AI.cooking

Play Episode Listen Later Dec 31, 2021 26:20


Latest news about Artificial Intelligence. AI knowledge corner: Narrow AI.

Let's talk Transformation...
#38 Learning tomorrow : Artificial Intelligence & neuroscience working hand in hand with Alexia Audevart

Let's talk Transformation...

Play Episode Listen Later Oct 4, 2021 26:32


"We are already living in the century of the brain": What is the future of humanity, and how do we learn in an increasingly digital world? How do AI and neuroscience complement each other? Alexia and I have a conversation around bringing technology and neuro-cognition together: demystifying AI and neuroscience and what they bring to the learning landscape. AI is changing our world; the technological and neuro-revolution is transforming how we live and work, and our relationships with others. AI also plays a decisive role in the competitiveness of companies of all sizes and is revolutionising many sectors (e.g. marketing, CX, etc.). All businesses are impacted, and companies must embrace these new tools both to optimise tasks previously performed by humans and to constantly navigate change in an emerging environment. Alexia shares insights and experience from her research and work with companies both big and small.

The main insights you will get from this episode are:
Creating value through technology means bridging the gap between human value and technological value and promoting diversity in technology. Data and technology are ever more present and connect us in ways we never thought possible.
Neuroscience brings AI and human intelligence together. AI dates back to 1956, when scientists dreamt of recreating human cognitive functions in machines and unravelling the mysteries of intelligence (the Latin 'intelligere' means 'the ability to make connections').
Machine learning is based on statistical AI and data. Traditional programming uses computer data and is always accurate. ML, on the other hand, is not always accurate: it uses statistical information to create links, 'learn' and then make predictions (modelling).
Narrow AI currently only covers simple, repetitive, specialised tasks. In some cases AI outperforms humans, but machines cannot generalise and cannot yet go from narrow AI to general AI. We are learning more about neuro-cognition: AI learns quickly, and so must we.
Deep learning has led to major advances: artificial neural networks make simple calculations, with each layer deepening the level of understanding. This process is bio-inspired, analysing large amounts of data using computer science, mathematics and AI to gain a better understanding of how the human brain works.
How is AI regulated from an ethical point of view? Will there be universal rules for AI? Do we need new legislation? It is difficult to define an international ethical framework because different factors mean different results in different countries. The ethics question has long been debated, but we can start with robotics (cf. Asimov's laws): those developing the systems must seek to answer questions during the design phase (ethics by design).
What is the future of humanity? Will AI destroy the human race? What changes are expected? We are already living in the century of the brain, which has given rise to many 'neuro-disciplines' such as neuro-education, neuro-economics and neuro-law. AI is changing our world; the technological and neuro-revolution is transforming how we live and work, and our relationships with others; it is also multidisciplinary, i.e. human, cultural and societal. AI is at the heart of other sciences, with a theoretical basis in statistics and mathematics but in combination with the human science of biology. It plays a decisive role in the competitiveness of companies of all sizes and is revolutionising many sectors (e.g. marketing, CX, etc.). All businesses are impacted, and companies must accept these new tools both to optimize tasks previously performed by humans and to constantly navigate change in an emerging environment. We must all understand the important concepts of AI: what it is, what machines can do, what the AI use cases are, and what its impact will be on business, processes and us. We must educate ourselves, not be afraid and be...
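The contrast drawn in this episode between traditional programming (an explicit, always-accurate rule) and machine learning (a rule estimated statistically from data, hence approximate) can be sketched in a few lines. The example below is our own hypothetical illustration, not from the episode.

```python
# Illustration only: hand-written rule vs. rule estimated from data.

def fahrenheit_rule(celsius):
    """Traditional programming: the conversion rule is written by hand."""
    return celsius * 9 / 5 + 32

def fit_linear(samples):
    """'Learning': estimate slope and intercept from (x, y) samples by
    ordinary least squares, instead of being told the formula."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    slope = (sum((x - mx) * (y - my) for x, y in samples)
             / sum((x - mx) ** 2 for x, _ in samples))
    return slope, my - slope * mx

# Noisy temperature readings stand in for training data.
data = [(0, 32.4), (10, 49.7), (20, 68.1), (30, 86.2)]
slope, intercept = fit_linear(data)
print(fahrenheit_rule(25))               # exact rule: 77.0
print(round(slope * 25 + intercept, 1))  # learned rule: close to 77
```

The hand-written rule is exact every time; the learned rule is only as good as the data it was fitted on, which is the point made in the episode.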

Data And Analytics in Business
E53 - Anatoly Tulchinsky - Strategy and Approaches to Drive Better Business Outcomes Using Data & AI

Data And Analytics in Business

Play Episode Listen Later Apr 11, 2021 50:58


Anatoly Tulchinsky is one of those individuals in the data space who has got it all covered! Along with the full spectrum of technological knowledge, skill, and expertise, Anatoly accumulated extensive business acumen and industry knowledge throughout his 25 years of career. With his specialization in data monetization, advanced analytics, artificial intelligence, and machine learning, Anatoly has been working as Senior Director – Applied AI Solutions, Head of Data, Analytics & AI Practice at CGI. Established in 1976, CGI is among the largest IT and business consulting services firms in the world. With a presence across 21 industries in 400 locations worldwide, CGI provides comprehensive, scalable, and sustainable IT and business consulting services. Before joining CGI, Anatoly worked on numerous top technology positions at SMITH, IBM, and Cognos. He is the key creator and patent holder for IBM Watson Analytics technology. He partnered with IBM Research and drove some of the latest innovations into the Watson Analytics product. Also, developed Watson cognitive assets that use machine learning to self-learn and self-optimized in real-time. At Cognos, he created and patented a unique model-driven approach for implementing and maintaining analytical applications, which generated $2.8 million in sales in the first year and $6.2 million in the second year. Anatoly holds a total of 16 product patents in use today. He also regularly speaks at global conferences and summits. In this exclusive episode, Anatoly shares with us some valuable insights on: Latest technologies in data analytics that are enabling multifaceted business use cases. Use of data in customer personalization in B2B and B2C. Analyzing customer behaviour patterns from the company perspective in the B2B space. Government of Canada Scale AI program for Industry innovation. Impact of the program on future generations and better business outcomes. How other governments can replicate the same results. 
A high-level explanation of the organizational data strategy for the C-level. Importance of data governance in the overall strategy. How a large part of data cleansing can be automated with technology invented by Anatoly for IBM Watson. Go-to-market strategy for digital products or SaaS – the four aspects to consider. Using data and analytics to incorporate customer understanding into developing digital products - moving marketing into product development. General AI vs Narrow AI and what lies in the future. If you are in the C-level or in a leadership role or you have a digital product in the market, this is the episode you should not miss. DataAnalytics, DataScience, BigData, ArtificialIntelligence, MachineLearning, BusinessAnalytics, CustomerExperience --- Send in a voice message: https://anchor.fm/analyticsshow/message

Short And Sweet AI
OpenAI: For-Profit for Good?

Short And Sweet AI

Play Episode Listen Later Mar 8, 2021 5:34


One of the founding principles of OpenAI, the company behind technology such as GPT-3 and DALL·E, is that AI should be available to all, not just the few. Co-founded by Elon Musk and five others, OpenAI was created partly out of concern that AI could damage society. OpenAI was originally founded as a non-profit AI research lab. In just six short years, the company has paved the way for some of the biggest breakthroughs in AI. Recent controversy arose when OpenAI announced that a separate section of its company would become for-profit. In this episode of Short and Sweet AI, I discuss OpenAI's mission to develop human-level AI that benefits all, not just a few. I also discuss the controversy around OpenAI's decision to become for-profit. In this episode, find out: OpenAI's mission. How human-level AI, or AGI, differs from Narrow AI. How far we are from using AGI in everyday life. The recent controversy around OpenAI's decision to switch to a for-profit model. Important Links and Mentions: https://drpepermd.com/podcast-2/ep-what-is-gpt-3/ (What is GPT-3?) https://openai.com/charter/ (OpenAI's mission statement) Resources: https://www.youtube.com/watch?v=H15uuDMqDK0 (Elon Musk on Artificial Intelligence) Technology Review: https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ (The messy, secretive reality behind OpenAI's bid to save the world) Wired: https://www.wired.com/story/compete-google-openai-seeks-investorsand-profits/ (To Compete With Google, OpenAI Seeks Investors---and Profits) Wired: https://www.wired.com/story/company-wants-billions-make-ai-safe-humanity/ (OpenAI Wants to Make Ultrapowerful AI. But Not in a Bad Way) Episode Transcript: Hello to you who are curious about AI. I'm Dr. Peper and today I'm talking about a truly innovative company called OpenAI. So what do we know about OpenAI, the company unleashing all these mind-blowing AI tools such as GPT-3 and DALL·E? 
OpenAI was founded as a non-profit AI research lab just six short years ago by Elon Musk and five others who pledged a billion dollars. Musk has been openly critical that AI poses the greatest existential threat to humanity. He was motivated in part to create OpenAI by concerns that human-level AI could damage society if built or used incorrectly. Human-level AI is known as AGI, or Artificial General Intelligence. The AI we have today is called Narrow AI: it is good at doing one thing. General AI would be good at any task; it is created to learn how to do anything. Narrow AI is great at doing what it was designed for, whereas Artificial General Intelligence would be great at learning how to do whatever it needs to do. To be a bit more specific, General AI would be able to learn, plan, reason, communicate in natural language, and integrate all of these skills to apply to any task, just as humans do. It would be human-level AI. Creating artificial general intelligence is the holy grail of the leading AI research groups around the world, such as Google's DeepMind or Elon's OpenAI. Because AI is accelerating at exponential speed, it's hard to predict when human-level AI might come within reach. Musk wants computer scientists to build AI in a way that is safe and beneficial to humanity. He acknowledges that in trying to advance friendly AI, we may create the very thing we are concerned about. Yet he thinks the best defense is to empower as many people as possible to have AI. He doesn't want any one person or a small group of people to have AI superpower. OpenAI has a 400-word mission statement, which prioritizes AI for all over its own self-interest, and it's an environment where its employees treat AI research not as a job but as an identity. The most succinct summary of its mission has been phrased as "an ideal that we want AGI to go well." Two specific parts of its mission are to avoid building human-level AI that harms humanity or unduly concentrates...

New Leadership Podcast
Führung von Künstlicher Intelligenz – geht das? Benno Blumoser, Head Innovation - Siemens AI Lab

New Leadership Podcast

Play Episode Listen Later Feb 2, 2021 53:34


Episode 29 is about artificial intelligence and its potential to improve the world in the future and to expand the limits of what humans can do. The challenges that must be overcome along the way are explained in detail in an engaging discussion between Benno Blumoser, Head of Innovation at the Siemens AI Lab, and Sebastian Morgner on the Future of Leadership Podcast. Artificial intelligence (AI for short) is taking over more and more human tasks, with the goal of improving the world with its support. Wherever human intelligence is challenged, AI comes into play. When AI is applied only to a clearly defined task and its approach to the problem does not vary, we speak of narrow AI (weak artificial intelligence). Siemens' strategy sits at the intersection of AI and the digital twin, which are considered two essential drivers for the development of digital industry. By combining physics-based simulations with data analytics in a fully virtual environment, the digital twin yields new insights. The advantage is the ability to prevent potential damage to the physical system before it occurs. It may soon even be possible to foresee a person's illnesses and treat them early. In the future, it should be possible to apply artificial mechanisms within the digital twin, a kind of "Alexa" for industry. Asked whether the prospect of AI controlling natural intelligence is a cause for concern, Benno Blumoser reacts calmly. So-called general AI exists only in science-fiction films, and he emphasizes that the more powerful the technology, the greater the responsibility and the more deliberately decisions must be made. He therefore concludes that, for example, no machine learning may be present in safety-critical areas. 
According to him, the limit of autonomy lies in self-optimization to the point where humans can no longer follow what the system is doing. The challenge is that there is not yet an ultimate system for making autonomy in decentralized structures more efficient. A further difficulty lies in the interface between the traditional, hierarchical world and hierarchy-free "spaces".

DMRadio Podcast
Narrow AI and the Incremental Optimization of Business

DMRadio Podcast

Play Episode Listen Later Dec 13, 2020 75:56


Eric Kavanagh, The Strategic CDO, Syndicated Radio Host
Rajiv Shah, data scientist at DataRobot, where his primary focus is on helping customers achieve success with AI
Robin Grosset, Chief Technology Officer at MindBridge Ai
Larsh Johnson, Chief Technology Officer at Stem, Inc.

The Aerospace Executive Podcast
AI: Sci-Fi Threat or a Vital Tool for the Future? w/Graham Plaster Part 2

The Aerospace Executive Podcast

Play Episode Listen Later Nov 19, 2020 12:40


In the mid-1990s nobody could have predicted how far and how fast the internet would develop, and the past 30 years have been revolutionary. Once limited to science fiction, Artificial Intelligence is becoming our reality. That being said, the AI we've seen in movies isn't quite the same as what we're seeing in real life today. When we think of AI, we tend to think of robots with human qualities and potentially dangerous outcomes, but how much of that is true? Is AI as risky as science fiction may lead us to believe? In this episode, CEO of TheIntelligenceCommunity.com, Graham Plaster returns to the Aerospace Executive Podcast to explain how AI is really being used today, and how it might be used in the future. With AI, we have to figure out the ethical issues with privacy and autonomy and ask ourselves how much of our lives we want to outsource. -Graham Plaster   Three Things You'll Learn In This Episode: The difference between General and Narrow Artificial Intelligence: Most of us associate AI with robots that can provide a human experience (like Siri), but that's only an illusion of interaction and doesn't offer much of a threat. On the other hand, Narrow AI goes deep on singular tasks like facial recognition, and that can be open to abuse. What makes Narrow AI risky: Narrow AI technologies like facial recognition have the benefit of identifying people all over the world more quickly and easily. However, if not developed with ethics in mind, facial recognition could pave the way for racial profiling and invasive surveillance. How AI will impact warfare in the future: As governments gain access to tools like drones and facial recognition, they'll have the ability to outsource special warfare to machines. This will have a major effect on how wars are fought, so we have to start thinking about the moral implications of using AI.
Guest Bio: Graham Plaster is an entrepreneur, defense innovation advisor, and author of the bestselling In The Shadow of Greatness. He is also the CEO and Founder of TheIntelligenceCommunity.com, a professional network for US government, military, national security, and private sector personnel. Prior to launching The Intelligence Community Inc, Graham served on active duty as a Naval Officer for over a decade. An expert advisor on emerging technology, Graham is passionate about learning about the newest advancements. To find out more and to connect with Graham, visit: https://www.theintelligencecommunity.com https://www.linkedin.com/in/grahamplaster https://www.linkedin.com/mwlite/company/theintelligencecommunity-com  https://www.youtube.com/channel/UCX5r4lGXW1yNcxtoi9Q3lGw https://www.amazon.com/Shadow-Greatness-Leadership-Sacrifice-Americas/dp/1612511384  Learn More About Your Host: Co-founder and Managing Partner for Northstar Group, Craig is focused on recruiting senior-level leadership, sales, and operations executives for some of the most prominent companies in the aviation and aerospace industry. Clients include well-known aircraft OEMs, aircraft operators, leasing / financial organizations, and Maintenance / Repair / Overhaul (MRO) providers. Since 2009 Craig has personally concluded more than 150 executive searches in a variety of disciplines. As the only executive recruiter who has flown airplanes, sold airplanes, AND run a business, Craig is uniquely positioned to build deep, lasting relationships with both executives and the boards and stakeholders they serve. This allows him to use a detailed, disciplined process that does more than pair the ideal candidate with the perfect opportunity: it also hits the business goals of the companies he serves.

Provenance Marketing Show
A Kiwi Original - Asa Cox | Arcanum 055

Provenance Marketing Show

Play Episode Listen Later Sep 8, 2020 37:29


Asa Cox is the CEO of Arcanum AI, a company that is empowering New Zealand businesses to see things previously unseeable or unknowable, and that's translating into better business outcomes. Arcanum has built over 50 machine learning models that can learn from your video, text and data across a number of problems that humans can't solve on our own. While AI may sound futuristic, Asa makes it sound like now is the time to start exploring what you might want to see or know with the help of a dash or two of AI.
LINKS MENTIONED
https://www.arcanum.ai
KEY TIMESTAMPS
2 mins 18 secs When machine learning and AI help to achieve business outcomes. We look at some business examples.
4 mins 31 secs Getting quality data from factory floor manufacturing machines into a data lake to extract insights.
8 mins 16 secs Getting the business side of an organisation to understand the meaningful impact that technology can have.
10 mins 23 secs Successful business use cases deploying computer vision across sports, shopping malls and even power line maintenance.
12 mins 25 secs The 50 machine learning models for video, text and structured data, the problems they've been built for, and how Arcanum's models are different from GPT-3.
14 mins 21 secs Narrow AI.
15 mins 45 secs The Arcanum AI full stack team that goes from research through to support and maintenance.
17 mins 41 secs Finding the next customer.
20 mins 17 secs New Zealand's software skills and the advantage of dealing with a broader range of business problems.
23 mins 8 secs Data security, privacy and who owns the algorithm.
27 mins 52 secs Trusting algorithms or humans?
33 mins 7 secs Why AI isn't like the movies, how it's more accessible than ever and why now's the time to start exploring its capability for any sector.

Manual Work is a Bug
Artificial Intelligence: General, Narrow and Open AI

Manual Work is a Bug

Play Episode Listen Later Sep 7, 2020 25:09


In this episode Matthew is joined by Mauricio Gomes, CTO of Mav, to talk about all things Artificial Intelligence. Mauricio breaks down the basics of and differences between General vs Narrow AI, AGI, and OpenAI's latest release, GPT-3. They share some of their favorite OpenAI demos so far, plus use cases and benefits for businesses who want to utilize OpenAI-primed NLP models. Show Notes and Links From the Episode: hiremav.com/podcast/artificial-intelligence-general-narrow-and-open-ai Have questions? Leave us a message on Anchor or reach out to us on social media. Find Matthew: Instagram | Linkedin | Twitter Find Mauricio: Linkedin | Twitter | Github --- Send in a voice message: https://anchor.fm/manualworkisabug/message

The Artists of Data Science
AI Through The Ages | Djamila Amimer, PhD

The Artists of Data Science

Play Episode Listen Later Jul 20, 2020 55:30


On this episode of The Artists of Data Science, we get a chance to hear from Djamila Amimer, an experienced business leader and entrepreneur with a broad range of experience across multiple domains. She is the CEO and founder of Mindsenses Global, a management consultancy specializing in artificial intelligence with a mission to help businesses and organizations apply A.I. and unlock its full potential. Djamila shares with us her journey into the field of A.I., some of the concerns she sees the field facing within the next few years, and the applications of A.I. expanding to other businesses and organizations. She also highlights important soft skills everyone should develop, and advice for women in tech. This episode is packed with tips from an expert in A.I.! WHAT YOU'LL LEARN [8:32] Biggest concerns for data scientists within the next few years [16:56] Ethical concerns that data scientists should understand with general A.I. [21:24] How A.I. can help in the fight against COVID-19 [27:10] Djamila's work with Mindsenses Global [32:42] Advice on how to become an entrepreneur QUOTES [33:17] “...your journey is going to be lonely. So you have to have a lot of resilience to be able to sustain yourself and you grow your business…” [34:02] “I believe that if you want to do it, if you really, really want to and you believe in it...you will succeed no matter what…” [35:28] “…you have to be able to adapt to a changing environment.” FIND DJAMILA ONLINE LinkedIn: https://www.linkedin.com/in/dr-djamila-amimer-142662137/ Twitter: https://twitter.com/mind_senses Website: https://mindsenses.co.uk/ SHOW NOTES [00:01:21] Introduction for our guest today [00:03:15] Talk to us a bit about how you got involved with the field of artificial intelligence, what drew you to the field? [00:03:48] Where do you see the field of A.I. headed in the next two to five years? What do you think is going to be the next wave of A.I.? 
[00:05:09] A historical tour through the three waves of A.I. [00:07:07] What do you think separates the great data scientists from the good ones? [00:08:26] What do you think are going to be some of the biggest concerns that a data scientist will face in the next two to five years? [00:10:13] Narrow AI, General AI, and the future of AI [00:16:43] The ethical concerns data scientists will face as AI evolves [00:21:19] How can AI be used to help us fight this Covid-19 pandemic? [00:24:57] Do you think that we could use AI and machine learning to identify or at least predict the next pandemic? [00:25:30] Which one of your research works do you think is most relevant to our current times and can you maybe make the connection for us? [00:27:02] A deep dive into the work that Dr. Amimer does at Mindsenses Global [00:32:28] Tips for anyone who is thinking of becoming an entrepreneur [00:33:44] How to cultivate an entrepreneurial mindset [00:35:32] Data science entrepreneurship opportunities in the COVID world [00:38:05] The soft skills you need to stand out [00:41:13] How can a student with nothing but a laptop and an Internet connection use AI for good? [00:44:22] Advice for women in STEM [00:46:11] What can the data community do to foster the inclusion of women in STEM? [00:48:27] What's the one thing you want people to learn from your story? [00:49:12] The lightning round Special Guest: Djamila Amimer, PhD.

Economics For Business
Steve Phelan Explains Why Entrepreneurial Intelligence Beats Artificial Intelligence

Economics For Business

Play Episode Listen Later Jun 2, 2020


Key Takeaways and Actionable Insights What is Entrepreneurial Intelligence? For Steven Phelan, “It's all about the spark” — the moment of inspiration in combining disparate elements together to develop a new solution. Humans draw on “the fringes of consciousness” to create new constructs. Entrepreneurs also take risks, investing time, talent and treasure in their venture in hopes of gain, yet understanding that they could lose something of value to them in the endeavor. How do we contrast Entrepreneurial Intelligence and Artificial Intelligence? First, we need to differentiate between the narrow and general forms of AI. Narrow AI is software that can solve problems in a single domain. For example, a Nest thermostat can raise the temperature or lower it in a room according to a pre-set rule. “If this, then that” is the general rule for this kind of intelligence. The parameters are designed by the programmers. For the unstructured problems of life and business, a truly intelligent computer would have to figure out for itself what is important. Part of the problem is that understanding or predicting human motivations — as entrepreneurs do — requires a “theory of mind”, an understanding of what makes humans tick. Entrepreneurs need empathic accuracy — unavailable to AI — to anticipate the needs of consumers. A sentient computer would need self-awareness or consciousness to truly empathize with humans, and have a set of values with which to prioritize decisions. What's the role of machine learning? If you work in a business that generates a lot of data, it can be mined by data scientists for patterns, and those patterns might indicate a better way to respond to customer needs. The richest source of data is behavioral — like choosing songs to listen to on Pandora. Machine learning can detect a pattern of what kinds of songs a user chooses most. 
A human interpreter can translate those patterns into preferences — in other words, motivations are embedded in behavior and machine learning can help entrepreneurs extract them. So, the entrepreneur's best resource is entrepreneurial intelligence. The psychologist Howard Gardner helped us to recognize many types of intelligence, including math, language, spatial, musical and social. There are two types that might be indicative of entrepreneurial intelligence: EQ (Emotional intelligence) might be associated with intensified empathic skills and empathic accuracy; CQ (Curiosity Intelligence) is linked to the kind of creativity that finds solutions by combining elements on the “fringes of consciousness”, as Hubert Dreyfus puts it. Can entrepreneurs and business owners assess their own entrepreneurial intelligence? There are scales to measure EQ and Creativity. Here's a link to an entrepreneurial quotient assessment: Mises.org/E4E_68_QA And here is a more action-oriented self-assessment we developed for E4E: Mises.org/E4E_68_SA The bottom line: Entrepreneurs need knowledge of how to profitably satisfy customer preferences given the resources at hand. This is not a trivial requirement. It is not possible to pre-state all of the uses for a given resource nor to compute the payoff for a given application. Current computational methods are thwarted without a complete list of entrepreneurially valid moves and the payoffs from such moves. No amount of growth in processing power, data communication, or data storage, can solve this problem. The late Steve Jobs is often held up as the epitome of a successful entrepreneur. His founding of Apple, ousting by his own board, and subsequent return to rescue the company, and then make it the most valuable publicly traded company in the world is the stuff of legend. One of the apparent secrets of his success was to understand that “people don't know what they want until you show it to them. That's why I never rely on market research. 
Our task is to read things that are not yet on the page.” This ability to “read things that are not yet on the page” lies at the heart of the concept of empathic accuracy. Empathic accuracy is “the ability to accurately infer the specific content of other people's thoughts and feelings”. Until AI can do this, Entrepreneurial Intelligence is a better tool for the innovating entrepreneur. Additional Resources "Entrepreneurial Intelligence vs. Artificial Intelligence" (PDF): Mises.org/E4E_68_PDF "Entrepreneurial judgment as empathic accuracy: a sequential decision-making approach to entrepreneurial action" by Jeffrey S. McMullen (PDF): Mises.org/E4E_68_Article "Are you ready to be an entrepreneur?" (PDF): Mises.org/E4E_68_QA "Entrepreneurial Self-Assessment" (PDF): Mises.org/E4E_68_SA
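The "if this, then that" thermostat rule that the episode uses to characterize narrow AI can be sketched in a few lines of Python. This is a hypothetical illustration; the function and parameter names are invented for the sketch and are not taken from any real Nest API:

```python
def thermostat_rule(current_temp: float, target_temp: float, hysteresis: float = 0.5) -> str:
    """Narrow AI at its simplest: a fixed 'if this, then that' rule.

    Every parameter (target temperature, hysteresis band) is chosen by the
    designer; the system never decides for itself what is important.
    """
    if current_temp < target_temp - hysteresis:
        return "heat"   # too cold: raise the temperature
    if current_temp > target_temp + hysteresis:
        return "cool"   # too warm: lower the temperature
    return "idle"       # within the comfort band: do nothing

# The rule handles exactly one structured problem in one domain;
# anything outside that domain is simply not representable.
print(thermostat_rule(18.0, 21.0))  # heat
```

Everything this rule "knows" was put there by its programmer, which is precisely the contrast Phelan draws with the unstructured problems of life and business, where the entrepreneur must figure out what matters in the first place.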


What´s Next - Rethinking Architecture
Podcast 20200517-How far away are we from Super Intelligence AI?

What´s Next - Rethinking Architecture

Play Episode Listen Later May 17, 2020


General Artificial Intelligence Podcast 20200517 In this podcast Therese talks about General Artificial Intelligence, giving definitions of Narrow AI and Super Intelligence AI and discussing the documentary “We Need to Talk About AI” by James Cameron. Questions discussed from the film: How far away is General Artificial Intelligence? How far away is consciousness? Are we ready to hand over control to machines? Can we co-evolve? Is it possible to put ethics into machines? The Trolley Problem

ITSPmagazine | Technology. Cybersecurity. Society
Advanced Technology And Its Integration With Our Way Of Life #1 | With Stephen Wu And Keith Abney

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Apr 12, 2020 46:30


The Cyber Society | An ITSPmagazine Column Advanced Technology And Its Integration With Our Way Of Life #1 | With Stephen Wu And Keith Abney Hosts: Marco Ciappelli & Sean Martin Guests: Stephen Wu And Keith Abney Hello humans, robots, intelligent or stupid beings, and everything in between. I welcome you all to the Cyber Society of Today—a wondrous place where 'what' is a possibility, 'how' is full of options, and 'when' is a mystery. Despite what you may think, this is a real place. It is here, it is now, and most certainly you are in it. So, buckle up, be open-minded, and enjoy the ride—the doors are locked, and there is no place to hide. In this podcast, Sean and I are following up on an exciting story that we started during one of the panels we hosted at the RSA Conference in San Francisco a few weeks ago. This is the first of two (perhaps many more) stories on ITSPmagazine’s Cyber Society about advanced technology and its integration with our way of life. Or, is it the other way around? Well, let's get started and see where we end up with this. This first episode is about the present status of advanced technology, and we start by defining and interpreting some of the terms that are part of this discussion. We look at where we are, how we got here, and what problems we're facing right now as we begin to live in this cyber society. Obviously, advanced technology is not just a vision for the future. It is about the present and about the fact that robots and artificial intelligence are among us and we are already working and living with them. Today, machine intelligence is not the same as human intelligence. Will it ever be? What is the difference between human and machine cognition? What is the definition of intelligence? To answer these—and many more—questions and to make us think and prepare to have more conversations in this topic, we are navigating and exploring possibilities with a legal expert and a philosopher. 
Stephen Wu is an AI/technology attorney, former chair of the American Bar Association Science and Technology Law Section, and founder of the very first ABA-wide National Institute on Artificial Intelligence and Robotics. Keith Abney teaches at Cal Poly State University; he is a philosopher and senior fellow at the Ethics and Emerging Sciences Group (a non-partisan organization focused on the risk, ethical, and social impact of emerging sciences and technologies), and co-editor of Robot Ethics and Robot Ethics 2.0. If intelligence is what enables a cognitive goal to be achieved, many machines are already far more intelligent than humans. Still, if we move from a narrow to a general-purpose use of artificial intelligence (AI), then things change quite a bit. Narrow AI is the attempt to achieve a narrow cognitive goal. We are there. General AI—or human intelligence—is, at the moment, at best in tomorrow's land. We will talk about that reality next time on the Cyber Society, when we will also discuss the future of advanced technology and how today's decisions can make the difference between something that could resemble either a utopia—or a dystopia. I am personally betting on something in between, but let's see if human intelligence proves to be much better than I think it is. Our goal here at ITSPmagazine is to have conversations that leave people thinking once the podcast music fades out. ____________________ This episode of The Cyber Society is made possible by the generosity of our sponsor, Nintex. 
Be sure to visit their directory pages on ITSPmagazine - Nintex: https://www.itspmagazine.com/company-directory/nintex _______________________________________________ Listen to the second episode recorded on The Future Of The Future Column: https://www.itspmagazine.com/the-future-of-the-future _______________________________________________ To catch more stories in The Cyber Society: https://www.itspmagazine.com/the-cyber-society

Techno-Biotic Podcast
Episode 5 - Dan Turchin and the Intersection of AI and Humanity

Techno-Biotic Podcast

Play Episode Listen Later Mar 4, 2020 53:25


In this episode, Laura and Shane talk with Dan Turchin, the founder and CEO of Astound.AI - https://astound.ai/. We touch on a variety of areas in this discussion, including: Which jobs are best suited for AI, and which jobs will continue to require humans to perform them? Why is AI fundamentally about making the lives of humans better? What is coming in the next 2 years related to how we use and interact with AI? How is AI currently impacting how people work today? How do we redefine human value beyond our ability to trade our time for a wage? The difference between Artificial General Intelligence (AGI) and Narrow Artificial Intelligence, and why the current usage we see today is more focused on Narrow AI - https://medium.com/@tjajal/distinguishing-between-narrow-ai-general-ai-and-super-ai-a4bc44172e22 Where do empathy and curiosity come into play between humans and AI? Is it too late to re-skill the workforce to support the future? - https://www.forbes.com/sites/tomtaulli/2019/05/04/how-to-reskill-your-workforce-for-ai-artificial-intelligence/#78531eae3eb9 Dan introduces us to the concept of “Dull, Dirty, or Dangerous” as the litmus test for where there is an opportunity for machines to do a job that a human is currently performing. - https://www.robotics.org/blog-article.cfm/How-Robots-Are-Taking-on-the-Dirty-Dangerous-and-Dull-Jobs/209 Whether we should fear that AI will take jobs away from people. Whether AI will eventually decide that humans are imperfect and shouldn't exist, and why some technologists fear this outcome. - https://www.youtube.com/watch?v=Ra3fv8gl6NE How low-cost compute power has democratized access to AI and Machine Learning. Topics coming out of the recent British Standards Institute conference on AI. 
- https://shop.bsigroup.com/Navigate-by/Conferences/Conferences/Now-Booking/The_Digital_World_Artificial_Intelligence/BSI_Event_TheDigital_WorldArtificialIntelligence/ --- Send in a voice message: https://anchor.fm/techno-biotic/message Support this podcast: https://anchor.fm/techno-biotic/support

Self-taught Novice
Heated Debate on Whether the Negatives of AI Outweigh the Positive Impact – Self-taught Novice Podcast #S02E03

Self-taught Novice

Play Episode Listen Later Oct 14, 2019 57:14


Welcome to this week's episode of the Self-taught Novice Podcast. This week's episode is a debate on AI: whether its merits outweigh its demerits. It is an extension of last week's episode on algorithms and AI-driven features and what they hold for Ghana and Africa. If you have not already, it'd be best to listen to last week's episode of the Self-taught Novice Podcast so you can follow the discussion. Click here. So why say AI's merits outweigh its demerits? In the last episode, we established that AI drives the industrial revolution of this age. With AI, industries have become more effective and efficient, and as industries run economies and human civilizations, this translates to the overall betterment of humanity. Take the example of the steam engine, which initiated the use of machines in factories. There was a lot of scepticism about the introduction of the steam engine, with the fear that it would threaten livelihoods. Amid all this, the steam engine went ahead to redefine manufacturing in its age, reducing human labour and increasing machine output. Similarly, in this age, AI is leading the industrial revolution. AI is connecting man with machine. From this submission we are to accept that AI will inevitably face scepticism and opposition in its initial stages. The danger with AI is that it is progressing at an exponential rate which we cannot keep up with. Google's DeepMind was able to beat the world's best Go player after being fed tons of information. Go has an enormous number of possible permutations, and there was no way to trace how the AI managed to win. However, this is being misconstrued as a benefit. All systems are susceptible to going rogue. AI is no exception; it is not perfect.
In areas where AI is used to manage essential services, such as power generation and delivery or the control of fuel and water pipelines, any mishap could produce tragedy on a catastrophic scale. It is questionable to abandon important aspects of our lives to a technology we cannot fully understand or keep up with. We must tread with caution. (There are two types of AI: Narrow AI and General AI. Narrow AI is generally not as smart and covers the little things, like how your phone camera auto-focuses on objects when you're taking a picture. General AI is more advanced and smarter; it includes IBM Watson and Google's DeepMind. It's the kind of futuristic technology often portrayed in movies, the flying cars and the like. This conversation is focused on General AI.) AI is the replication of human intelligence on machines. We may not fully understand AI, because we are still trying to understand human intelligence in order to replicate it in artificial systems. The fundamental algorithms are still being understood. AI systems can be likened to human beings: even though we are aware of the use of our body parts, we cannot fully predict our thought patterns. AI is not predictable and will solve problems in ways we do not expect. The danger is that we cannot track AI; there is no traceable path as to how it works and functions. Neural networks think on their own. AI is not controlled. It seems the concept is that we abandon all our control to AI; that is the expectation we have come to have. The threat of AI having access to all our information is high. In spite of the numerous benefits of AI, if it goes uncontrolled and unregulated it can have devastating repercussions. Legislation cannot keep up with AI. How can we keep up with something we cannot understand? Social media manipulation and fake news are a result of this. Legislation will eventually catch up with AI. Everything evolves with time.
If we are to compare the first mass-produced car, the Ford Model T, with the cars of today, you would notice that model was very unsafe for use. We cannot compare AI with cars.

Makihub
Narrow AI: Why AI can't make coffee

Makihub

Play Episode Listen Later Jun 20, 2019 22:23


Makihub: your weekly briefing on artificial intelligence. Here I serve you topics around artificial intelligence, but easy to digest and without much jargon. Like sushi, only for the brain! This week I invited my first guest. Sebastian is a music producer, and he trains an artificial intelligence. We discuss what "machine learning" means, learn what a "Narrow AI" and a "General AI" are, and discuss in detail which tests exist to find out whether something is a "General AI". Along the way we discover how intelligent we humans actually are, because we can make coffee! ;) Sounds exciting? Then listen to the podcast, or read the transcript of the episode on makihub.net. If you have questions or suggestions, or just want to chat, send me a message! Here is how you can reach me: E-mail - viktoria@makihub.net Instagram - @_makihub_ Did you enjoy the episode? It would mean a lot to me if you left me a review! About Makihub: Makihub is the podcast that serves you artificial intelligence in small, tasty bites. Without much technical jargon, we explore together how the new technologies around AI will change our lives and our thinking. In a weekly briefing I cover current conversations on artificial intelligence and share my thoughts with you. --- Send in a voice message: https://anchor.fm/makihub/message

The Marketing Agency Leadership Podcast
Artificial Intelligence: Trust the Machine to Master Marketing Automation

The Marketing Agency Leadership Podcast

Play Episode Listen Later May 7, 2019 27:29


Magnus Unemyr, Marketing Automation Expert, helps companies install and set up website-based marketing automation systems. He recommends using site lead magnets (calls to action, buttons, banner ads), gating some content behind landing pages with registration forms, offering incentives to register information, e.g., a PDF that can only be downloaded in exchange for contact information, and adding nurturing sequences with follow-up emails. (He uses Tripwire.) Each piece of content should drive the customer or potential customer one step closer to making the purchase. In this interview, Magnus clarifies the difference between today's narrow AI software and strong AI. Narrow AI learns from data and self-optimizes over time by iteratively improving its original function. Narrow AI in an email application might learn the optimal time to deliver an email to an individual customer to increase the likelihood of that email being opened. Strong AI has the capability to learn things outside of its originally targeted function . . . to reason on its own, to start to have feelings . . . which is still the stuff of science fiction. Traditional marketing automation systems, e.g., HubSpot, ActiveCampaign, or Act-On, are fairly basic in autonomous decision-making. Magnus sees a natural progression from using AI-powered algorithms/marketing automation systems to automatically harvest data, eventually moving toward more complex functions: developing "insights" and, eventually, making autonomous decisions about and triggering individualized marketing outreach initiatives without human intervention. This future AI will be able to do highly personalized outreach, sending the right content to the right person at the right time, in the right channel, and at the right frequency, improving the customer experience and minimizing the "spamminess" of cookie-cutter automatic responses.
However, when deploying new marketing automation systems, companies need to budget for content production, and each piece of content should move the reader or viewer one step closer to the purchase. Also, major marketing automation tools are not standalone solutions; they need to be integrated with additional specialized smaller marketing automation systems: webinar platforms, proprietary databases, etc. AI can fail. For instance, AI algorithms are trained on historical data. Without enough historical data, an AI system will not be able to make accurate decisions. If the data is skewed or flawed, the AI algorithm will produce flawed predictions or behavior. The software supplier needs to be able to prove that the software is behaving correctly and producing accurate results. Magnus describes his book, Data-Driven Marketing with Artificial Intelligence, as the definitive guide to understanding and using AI in marketing. He has also written: Mastering Online Marketing, Internet of Things, Turn Your Knowledge and Skills into a Profitable Online Business, and eBooks and Beyond. Links are to Amazon. Every marketing automation system vendor provides video courses to teach users about their product, but invariably fails to discuss the features that should have been included . . . but weren't. Magnus created a detailed marketing automation course (about 70 videos) that explains concepts, strategies, use, and what comprises a good marketing automation system, without covering proprietary instructional material. The credit-card-accessed course is available on his website at: http://unemyr.com. Magnus can be reached on LinkedIn or on his website at: http://unemyr.com, where you can find his blog, his video episodes, and his training courses.
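Magnus's example of narrow AI learning the optimal send time per recipient can be sketched in a few lines. This is an illustrative toy, not code from any real marketing automation product; the function name, the smoothing prior, and the fallback hour are all assumptions made for the example.

```python
from collections import defaultdict

def best_send_hour(open_events, prior_opens=1, prior_sends=2):
    """Pick the send hour with the highest smoothed open rate.

    open_events: list of (hour_sent, was_opened) tuples from past emails
    to one recipient. A small Laplace-style prior keeps rarely tried
    hours from dominating on a single lucky open.
    """
    sends = defaultdict(int)
    opens = defaultdict(int)
    for hour, opened in open_events:
        sends[hour] += 1
        if opened:
            opens[hour] += 1
    if not sends:
        return 10  # arbitrary mid-morning fallback when there is no history
    # Smoothed open rate per hour: (opens + prior) / (sends + prior)
    return max(
        sends,
        key=lambda h: (opens[h] + prior_opens) / (sends[h] + prior_sends),
    )

history = [(9, True), (9, True), (9, False), (14, False), (20, True), (20, False)]
print(best_send_hour(history))  # 9: the hour with the best smoothed open rate
```

The "self-optimizing" part is simply that each new send/open event updates the counts, so the chosen hour drifts toward whatever the recipient actually responds to.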

AI in Action Podcast
E03 Peter Voss, Founder, CEO & Chief Scientist at AGI Innovations & Aigo.ai

AI in Action Podcast

Play Episode Listen Later Dec 4, 2018 18:14


Today’s guest on the show is Peter Voss, who is the co-founder, CEO and Chief Scientist at AGI Innovations and Aigo.ai. Peter launched Aigo.ai to commercialize our second-generation intelligence engine both to the consumer via the AigoToken Ecosystem and directly to enterprise customers. Aigo’s proprietary cognitive architecture implements the ‘Third Wave of AI’, providing highly intelligent, hyper-personalized assistants that have deep understanding of natural language, have short- and long-term memory, learn interactively, use context, and are able to reason and hold meaningful ongoing conversations. In this episode, Peter will share with you: Some of his interesting work over the years How he has seen Artificial Intelligence evolve Moving from Narrow AI into Artificial General Intelligence Advice on how to overcome challenges in your AI career Adding context to Conversational AI The importance of privacy in the ever-growing data world What the future holds for AI and Machine Learning

Fintech Impact
FinTech 101 with Guy Anderson (Guest Host) | EP20

Fintech Impact

Play Episode Listen Later May 24, 2018 46:16


In this 20th episode of the Fintech Impact podcast, Jason Pereira is the one getting interviewed this time, by his colleague, financial advisor Guy Anderson. The goal is to identify and define the acronyms, nomenclature, and tech speak typically used on Fintech Impact, and in the industry at large. Consider this Fintech 101. ●02:38 – API: stands for application programming interface. APIs are rules, or a language, that a company puts out there for something else to talk to its programs. ●05:10 – AWS: stands for Amazon Web Services, Amazon's cloud computing platform. Companies that use AWS include Netflix and Dropbox. ●08:37 – GDPR stands for General Data Protection Regulation, a series of data regulations and digital rights established by the European Union. ●17:36 – Whatever you put online, consider it there forever. Stupid things posted in the past could prevent you from getting certain jobs in the future. ●18:02 – Platforms: technological systems that allow other people to build other functions on top of them. ●20:31 – Narrow AI is artificial intelligence that essentially focuses on one task, like Apple's Siri.
Machine learning is throwing a ton of data at a computer system for it to mine and look for patterns that the human mind can't recognize, teaching itself to learn as new data comes in. ●27:00 – Blockchain: the underlying architecture and code of every cryptocurrency that exists, creating a timestamp and transaction data that are resistant to modification, in an open, distributed ledger that can record transactions between two parties. ●34:11 – Cryptocurrency transactions aren't instantaneous, but they are ultra-fast compared to bank transactions, which are "controlled ledgers." ●34:49 – So much of the financial system before cryptocurrency has been based on trust. ●35:50 – You have to convert money into cryptocurrency coins, transfer those coins to whoever you are doing your transaction with, and then they convert them back into money again. ●37:36 – You can send money anonymously with cryptocurrency anywhere in the world because payments are sent to private keys, and there are also public keys. ●37:50 – People can create their own cryptocurrencies relatively easily. ●42:20 – Bitcoin has implications for anti-money laundering. 3 Key Points: 1. API: stands for application programming interface.
APIs are rules, or a language, that a company puts out there for something else to talk to its programs. 2. You have to convert money into cryptocurrency coins, transfer those coins to whoever you are doing your transaction with, and then they convert them back into money again. 3. You can send money anonymously with cryptocurrency anywhere in the world because payments are sent to private keys, and there are also public keys. Podcasts that explain Crypto & Blockchain: https://tim.blog/2017/06/04/nick-szabo/ http://investorfieldguide.com/hashpower/ Tweetable Quotes: -"APIs allow integration across different modules." – Jason Pereira. -"I think I once saw a survey that something between 40-60% of all cloud services offered on the internet are offered through AWS." – Jason Pereira. -"Europe typically looks at it (technology) from a consumer-first standpoint, as opposed to the North American attitude of looking at it from a business-first standpoint." – Jason Pereira. Resources Mentioned: ●LinkedIn – Jason Pereira's LinkedIn ●Facebook – Jason Pereira's Facebook ●Woodgate Financial – Website for Woodgate Financial See acast.com/privacy for privacy and opt-out information.
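The blockchain definition given in this episode (timestamped transaction data, chained together so it is resistant to modification) can be illustrated with a toy hash chain. This is a teaching sketch only, not how Bitcoin actually works; there is no mining, networking, or signature verification here, and all function names are made up for the example.

```python
import hashlib
import json
import time

def _digest(block):
    """SHA-256 over the block's contents, excluding its own stored hash."""
    payload = {k: block[k] for k in ("timestamp", "transactions", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    """One block: a timestamp, transaction data, and the previous block's hash."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    block["hash"] = _digest(block)
    return block

def chain_is_valid(chain):
    """Recompute every hash and check every link; any edit breaks the chain."""
    for i, block in enumerate(chain):
        if block["hash"] != _digest(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block(["Alice pays Bob 5"], prev_hash="0" * 64)
chain = [genesis, make_block(["Bob pays Carol 2"], genesis["hash"])]
print(chain_is_valid(chain))                        # True
chain[0]["transactions"] = ["Alice pays Bob 500"]   # tamper with history
print(chain_is_valid(chain))                        # False: stored hash no longer matches
```

Because each block embeds the previous block's hash, rewriting an old transaction invalidates every block after it, which is the "resistant to modification" property the episode describes.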

B2B Marketing and More With Pam Didner

Recently, I did a keynote presentation about Humans vs. Machines. Is content marketing doomed? When I think about humans vs. machines, I think about self-aware artificial intelligence like the Terminator's Skynet, the Borg from Star Trek, or the agents in the movie The Matrix. Of course, R2-D2 and C-3PO from Star Wars, or even Bender from Futurama, also come to mind. However, these are Hollywood versions of AI. In real life, we have seen a robot called ASIMO, developed by Honda, kick a soccer ball to President Obama. We have seen sophisticated robots from Boston Dynamics that can flip or walk on uneven and hilly terrain, not to mention the autonomous cars developed by Google and other tech and car companies. We are developing these AI-based tools to make our lives easier and more productive. However, in January 2015, more than 7,000 people from tech, academia, and other fields signed a 4-page open letter based on the report titled "Research Priorities for Robust and Beneficial Artificial Intelligence". They acknowledged the immense benefits of AI, but cautioned policymakers and the public about the danger of self-aware, self-adjusting artificial intelligence. They also articulated AI's impact on aspects such as security, machine ethics, law, and privacy. Luminaries such as Elon Musk and Stephen Hawking both signed the open letter. Elon Musk affirmed his view that AI is more dangerous than nukes. Stephen Hawking expressed concern that AI could destroy human civilization. So, what is AI? Well, there are many different interpretations, different levels, and different perspectives of AI. So, let's define AI first. I love this definition from Wikipedia: intelligence exhibited by machines. What is intelligence, anyway? Intelligence means a machine can think and behave like humans. Let's peel the onion further. What is thinking? Thinking has many levels. Machines can think about how to play chess.
Machines can also think about how to adjust their behavior to assist humans, like Data in Star Trek. The different levels of thinking or behavior also define different levels of AI. In general, there are three types of AI: Artificial Narrow Intelligence (Weak AI or Narrow AI): does a task competently and modifies its behavior when the situation changes. Google Translate, AlphaGo, Siri, IBM Watson, and autonomous cars are all examples of narrow AI. Artificial General Intelligence (Strong AI or human-like AI): performs any intellectual task that humans can; is capable of the cognitive functions humans have, in essence no different from a real human mind; understands and reasons about its environment as humans would. Artificial Super Intelligence (ASI): AI becomes much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. On the AI continuum, it starts with Narrow AI, then AGI, then ASI. Currently, all the applications are task-driven; we are at the stage of narrow AI. There is no human-like AGI yet, but experts expect it will happen in the coming decades, especially as we keep working to make machines smarter and take on more complex tasks. The general consensus is that it will take a long time to get from narrow AI to AGI, but from AGI to ASI, the transition will happen in no time. When machines are smarter than humans, what will happen to us? That's what Elon Musk and Stephen Hawking are concerned about. In the meantime, many companies are developing AI-based tools to assist marketing efforts. I talked about how Lead Crunch feeds its customers' top 25 best customers to its AI-based platform to create an ideal customer profile (ICP) for them. Drift offers an AI-based chat bot that carries on intelligent conversations with potential prospects to get audiences to schedule a time with the company. Salesforce.com created Einstein, an AI-based platform; it learns from all that data to deliver predictions and recommendations based on your unique business processes. McCann Japan developed an AI-CD bot.
They used the bot to develop a creative concept for a commercial. Their first pilot project was creating a commercial for Clorets chewing gum, to convey "instant fresh breath that lasts for 10 minutes." Then, they tested the human-generated commercial against an AI-generated commercial. The human-generated commercial did slightly better than the AI-based one. Imagine that AI bot getting better every time... Here is AI-based music. Pretty good, right... Are marketers doomed? I don't think so. If you think about it, we still need to set up processes and workflows before we can take full advantage of AI. We still need to do quality checks. As of today, AI still needs to hand off whatever it accomplishes for humans to complete. However, it's important to stay on top of the trends and acquire new skillsets. Learning is part of every marketer's job description. I thought it was important for us marketers to understand what AI is and what it offers at this time.

digital kompakt | Business & Digitalisierung von Startup bis Corporate
Is Artificial Intelligence (AI) overhyped? | Black Box: Tech #8

digital kompakt | Business & Digitalisierung von Startup bis Corporate

Play Episode Listen Later Mar 7, 2017 48:56


In this crossover edition of the Black Box: Tech podcast, Joel Kaczmarek and Johannes Schaback, together with hardware and AI investor Fabian Westerheide, discuss how the current hype around artificial intelligence should be assessed, what the actual state of the technology is, and at what level future debates need to be held. You will learn... 1) ...what the terms Narrow AI, AGI, and neural networks mean 2) ...what dangers lurk behind artificial intelligence 3) ...whether AI is over- or underhyped 4) ...how sensible a central super-AI would be

digital kompakt | Business & Digitalisierung von Startup bis Corporate
Artificial Intelligence: How It Works, Applications, and the Market | Hardware & KI #2

digital kompakt | Business & Digitalisierung von Startup bis Corporate

Play Episode Listen Later Aug 24, 2016 55:23


Where does artificial intelligence stand today? What can be done with AI? What are its risks, and how does Germany compare in the field of AI? We spoke with AI expert Ronnie Vuine, who as the founder of Micropsi Industries knows the AI market inside out and also explained special forms such as Narrow AI, AGI, neural networks, and deep learning. You will learn... 1) ...where artificial intelligence stands today and what it can do 2) ...what can be expected from thinking computers in the coming years 3) ...how Germany is positioned as a market for AI 4) ...what risks are associated with artificial intelligence