POPULARITY
Hosts Simon and Jillian discuss how you can uncover hidden trends and make data-driven decisions - all through natural conversation, with Amazon Q in Quicksight, plus, more of the latest updates from AWS. 00:00 - Intro, 00:22 - Top Stories, 02:50 - Analytics, 03:35 - Application Integrations, 04:48 - Amazon Sagemaker, 05:29 - Amazon Bedrock Knowledge Bases, 05:48- Amazon Polly, 06:46 - Amazon Bedrock, 07:31 - Amazon Bedrock Model Evolution LLM, 08:29 - Business Application, 08:58 - Compute, 09:51 - Contact Centers, 10:54 - Containers, 11:12 - Database, 14:21 - Developer Tools, 15:20 - Front End Web and Mobile, 15:45 - Games, 16:04 - Management and Governance, 16:35 - Media Services, 16:47 - Network and Content Delivery, 19:39 - Security Identity and Compliance, 20:24 - Serverless, 21:48 - Storage, 22:43 - Wrap up Show Notes: https://dqkop6u6q45rj.cloudfront.net/shownotes-20250404-184823.html
New game-changing AI developments are here, from SageMaker Unified Studio to Bedrock's new multi-agent capabilities. Join your hosts Simon and Jillian for the latest updates from AWS. 00:00:00 - Intro 00:00:49 - Top Stories 00:02:31 - Amazon Bedrock 00:05:35 - Analytics 00:06:08 - Application Integration 00:06:41 - AWS Step Function Workflow Studio 00:06:59 - Amazon Bedrock 00:07:26 - GraphRAG 00:09:08 - Amazon Nova Pro Foundation Model 00:09:32 - Amazon S3 Table and Sagemaker Lakehouse 00:12:00 - Compute 00:13:30 - Customer Engagement 00:14:39 - Data Bases 00:15:09 - Developer Tools 00:17:09 - End User Computing 00:17:25 - Front end Web and Mobile 00:18:08 - Games Internet of things 00:20:12 - Management and Governance 00:20:31 - Networking and Content Delivery 00:20:41 - AWS Application Load Balancer 00:21:06 - Security Identity End Compliance 00:22:32 - Storage 00:23:47 - Wrap up
This week on the Official AWS Podcast, dive into the rapidly evolving world of Large Language Model Operations (LLMOps)! Join the experts as our host Shruti interviews Piyush Kadam, Sr. Product Manager, and Lauren Mullennex, Sr. Solutions Architect at AWS to explore how Amazon SageMaker is revolutionizing the customization, evaluation, and deployment of large language models at scale. Learn more: https://aws.amazon.com/sagemaker/pipelines/
The annual AWS re:Invent conference in Las Vegas has long been a marquee event for technologists and business leaders. But in 2024, it served as a rallying cry for a new technological epoch - one where generative AI (GenAI) is no longer a nascent tool but a transformative force shaping industries, economies, and creativity. At the heart of this year's address was Dr. Swami Sivasubramanian, AWS's Vice President of AI and Data, who positioned Amazon's cloud division not just as a vendor but as an architect of this revolution. Dr. Sivasubramanian began with a historical overture, likening the current moment to the Wright Brothers' first flight in 1903. That 12-second triumph, he noted, was not an isolated miracle but the result of centuries of cumulative innovation - from Leonardo da Vinci's aeronautical sketches to steam-powered gliders. In the same vein, GenAI represents the culmination of decades of research in neural networks, backpropagation algorithms, and the transformative power of Transformer architectures. However, technological breakthroughs alone were not enough. What set the stage for GenAI's explosive growth, Dr. Sivasubramanian argued, was the convergence of cloud computing, vast data lakes, and affordable machine-learning infrastructure - elements AWS has spent the better part of two decades perfecting. AWS SageMaker: The Vanguard of AI Democratization Central to AWS's GenAI arsenal is Amazon SageMaker, a comprehensive platform designed to simplify machine learning workflows. Over the past year, AWS has added more than 140 features to SageMaker, underscoring its ambition to stay ahead in the arms race of AI development. Among these innovations is SageMaker HyperPod, which provides robust tools for training the mammoth foundational models that underpin GenAI. HyperPod automates complex tasks like checkpointing, resource recovery, and distributed training, enabling enterprises like Salesforce and Thomson Reuters to train billion-parameter models without the logistical headaches. But SageMaker is evolving beyond its core machine-learning roots into a unified platform for data analytics, big data processing, and GenAI workflows. The platform's latest iteration consolidates disparate tools into a single, user-friendly interface, offering businesses an integrated suite for data preparation, model development, and deployment. Training Titans: HyperPod and Bedrock As GenAI models grow in size and sophistication, the cost and complexity of training them have skyrocketed. Dr. Sivasubramanian introduced two pivotal innovations aimed at alleviating these challenges. First, HyperPod Flexible Training Plans address the inefficiencies of securing and managing compute resources for training large models. By automating the reservation of EC2 capacity and distributing workloads intelligently, these plans reduce downtime and optimize costs. Second, Bedrock, AWS's managed service for deploying foundational models, makes it easier for developers to select, customize, and optimize GenAI models. Bedrock offers cutting-edge features like Prompt Caching - a cost-saving tool that reduces latency by storing frequently used queries - and Intelligent Prompt Routing, which directs tasks to the most cost-effective model without sacrificing quality. Case Studies in Innovation Throughout his keynote, Dr. Sivasubramanian showcased real-world applications of AWS's GenAI capabilities. 
Autodesk, the software titan renowned for its design and engineering tools, is leveraging SageMaker to develop GenAI models that combine spatial reasoning with physics-based design principles. These models allow architects to create structurally sound and manufacturable 3D designs, effectively automating tedious aspects of the creative process. Meanwhile, Rocket Companies, a leader in mortgage lending, has deployed Amazon Bedrock to create AI agents that handle 70% of customer interactions autonomously. These agents, embedded in Rocket's AI-driven platform, streamli...
Send Everyday AI and Jordan a text messageThink AI is just gonna fade into the wind? Like.... once the ChatGPT hype dies down? Think again shorties. One of the world's largest companies just silently went all in on autonomous AI agents. What did Microsoft announce and what does it all mean? We'll break it all down. Newsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion: Ask Jordan questions on Microsoft AIUpcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:1. Microsoft's Autonomous AI Agents2. Revolution of Autonomous AI Agents3. Rise of Autonomous AI Agents4. Autonomous AI Agent Competition5. Promising Future ProspectsTimestamps:02:45 Daily AI news05:25 Microsoft announces Copilot autonomous agents in London.08:34 Autonomous AI agents complete tasks without humans.10:48 Run continuously, triggered by real-time data connection.17:03 Listen to WorkLab Podcast for actionable insights.18:27 Microsoft AI event lacked visibility and attention.23:45 AI enhances understanding via a universal interface.25:12 OpenAI's reasoning model excels, revolutionizes human-like thinking.29:10 AI agent processes emails, plans actions autonomously.31:24 Microsoft's agent processes engagement leads using AI.34:46 Now autonomous, no duct tape or third-party.37:55 Agent Force similar to Microsoft's autonomous AI.Keywords:Jordan Wilson, Microsoft, Salesforce, AI technology, OpenAI, Anthropic, autonomous AI agents, AI tour in London, Satya Nadella, Microsoft 365 Graph, Dataverse, Fabric, WorkLab podcast, Effective Leadership and Adaptation, Rise of Autonomous AI Agents, Shift from Language Models, AI Agent Advancements, Universal Interface, Reasoning and Planning, Enhanced Memory and Context, Use Case - McKinsey, youreverydayAI.com, Copilot Studio, natural language, Salesforce Agent Force, AI-first companies, generative AI leadership, IBM Granite 3.0, ChatGPT Updates, Amazon SageMaker.
Daryl Martis of Salesforce and Rashna Chadha of AWS share how Amazon Bedrock integrates with Salesforce to enhance AI applications, highlighting the partnership's strategic benefits, AI model customization, and secure deployment options.Topics Include:Introduction to Amazon Bedrock and Salesforce partnership.Overview of Amazon Bedrock as an API-based generative AI service.The strategic collaboration between AWS and Salesforce.Salesforce Data Cloud and its integration with Amazon Bedrock.Overview of Salesforce Einstein and its use of Amazon SageMaker.Recent AI launches between Salesforce and AWS, including Slack AI and MuleSoft.Use cases of AI services like Amazon Textract within Salesforce.Bringing Your Own Large Language Model (BYO LLM) with Bedrock.Foundation models offered by Amazon Bedrock (Anthropic, Cohere, Llama).Overview of security, privacy, and compliance in Bedrock AI services.Salesforce Data Cloud's unified customer data and real-time AI capabilities.Bedrock's support for custom AI model evaluation and metrics.Consumption models in Bedrock: on-demand vs. provision throughput.Bedrock's agent capabilities for real-world applications like scheduling.Demo of using Amazon Bedrock models within Salesforce.Participants:Daryl Martis – Director of Product Management, Einstein AI - SalesforceRashna Chadha – AI/ML Specialist – Principal Solution Architect – Amazon Web ServicesSee how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
“Predictive analytics uses historical data, statistical algorithms, and machine learning to identify the likelihood of future outcomes. It helps anticipate customer behaviour, optimise marketing strategies, and improve ROI." Select Appropriate AI Tools: Choose AI tools like IBM Watson, Google Cloud AI, or Amazon SageMaker for predictive analytics based on your specific needs and capabilities. Prepare and Analyse Data: Collect and preprocess your data using tools like Trifacta, and apply relevant predictive algorithms to uncover actionable insights. Implement and Optimise: Deploy your predictive models, monitor their performance, and continuously improve them with platforms like H2O.ai to enhance customer engagement and marketing ROI. Cambridge Marketing College http://marketingcollege.com Neil Wilkins Online https://neilwilkins.online/category/metaverse-marketing/
Amazon Web Services (AWS) announced the AWS Public Sector Generative Artificial Intelligence (AI) Impact Initiative. The two-year, $50 million investment is designed to help public sector organisations - and those that directly support their technology needs - to accelerate innovation in support of critical missions using AWS generative AI services and infrastructure, such as Amazon Bedrock, Amazon Q, Amazon SageMaker, AWS HealthScribe, AWS Trainium, and AWS Inferentia. As part of this initiative, AWS is committing up to $50 million in AWS Promotional Credits, training, and technical expertise across generative AI projects. Credit issuance determinations will be based on a variety of factors, including but not limited to the customer's experience developing new technology solutions, the maturity of the project idea, evidence of future solution adoption, and the customer's breadth of generative AI skills. The Impact Initiative is open to new or existing AWS Worldwide Public Sector customers and partners from enterprises worldwide who are building generative AI solutions to help solve society's most pressing challenges. Across the public sector, leaders are seeking to leverage generative AI to become more efficient and agile. However, public sector organisations face several challenges such as optimising resources, adapting to changing needs, improving patient care, personalising the education experience, and strengthening security. To respond to these challenges, AWS is committed to helping public sector organisations unlock the potential of generative AI and other cloud-based technologies to positively impact society. This global initiative is open from June 26, 2024 through June 30, 2026. Benefits and resources will include: Tailored training: Check out the latest trainings to help you achieve your go-to-market goals and business needs. Generative AI Innovation Center expertise: Ideate, identify, and implement generative AI solutions securely with guidance from the Generative AI Innovation Center. Technical support: Learn from AWS technical experts and tutorials to optimize your generative AI build with Well-Architected Framework. Networking and free builder sessions: Access to registration for no-cost AWS Summit events around the world. Global thought leadership opportunities: Showcase your AWS Generative AI Impact Initiative success stories with AWS marketing efforts. The AWS Public Sector Generative AI Impact Initiative builds on our ongoing commitment to the safe, secure, and responsible development of AI technology. For example, AWS is a contributing partner to the National Science Foundation's National Artificial Intelligence Research Resource (NAIRR), which provides U.S.-based researchers and educators access to advanced computing, datasets, models, software, training, and user support for AI research. Additionally, Amazon is a member of the U.S. Artificial Intelligence Safety Institute Consortium, established by the National Institute of Standards and Technology, and is contributing $5 million to enable the development of tools and methodologies that organisations can use to evaluate the safety of their foundation models. AWS is also committed to supporting initiatives like the AI for Changemakers Accelerator program, led by Tech To The Rescue, which includes AWS Chief Technology Officer Werner Vogels' Now Go Build CTO Fellowship program. To learn more about the Impact Initiative, visit https://aws.amazon.com/generative-ai-impact-initiative. See more stories here.
Amazon Web Services, Inc. has announced a $230 million commitment for startups around the world to accelerate the creation of generative AI applications. This will provide startups, especially early-stage companies, with AWS credits, mentorship, and education to further their use of artificial intelligence (AI) and machine learning (ML) technologies. Part of the new commitment will fund the second cohort of the AWS Generative AI Accelerator, a program that provides hands-on expertise and up to $1 million in credits to each of the top 80 early-stage startups that are using generative AI to solve complex challenges. Applications for the AWS Generative AI Accelerator open today and will be accepted until July 19. Startups can apply here. "For more than 18 years, AWS has helped more startups to build, launch, and scale their business than any other cloud provider - it's no coincidence that 96% of all AI/ML unicorns run on AWS," said Matt Wood, vice president, Artificial Intelligence Products at AWS. "With this new effort, we will help startups launch and scale world-class businesses, providing the building blocks they need to unleash new AI applications that will impact all facets of how the world learns, connects, and does business." Interested startups can learn more about how to access these funds here. Startups can use AWS credits to access AWS compute, storage, and database technologies, as well as AWS Trainium and AWS Inferentia2, energy-efficient AI chips that offer high performance at the lowest cost. These credits can also be used on Amazon SageMaker, a fully managed service that helps companies build and train their own FMs, as well as to access models and tools to easily and securely build generative AI applications through Amazon Bedrock. AWS Generative AI Accelerator continues to launch successful startups The AWS Generative AI Accelerator identifies top early-stage startups that are using generative AI to solve complex challenges in areas such as financial services, healthcare and life sciences, media and entertainment, business, and climate change, among others. Participants will access sessions on ML performance enhancement, stack optimization, and go-to-market strategies. The 10-week program will match participants with both business and technical mentors based on industry vertical. Startups will receive up to $1 million each in AWS credits to help them build, train, test, and launch their generative AI solutions. They will also have access to industry experts, technology, and technical sessions from NVIDIA, the program's presenting partner, and be invited to join the NVIDIA Inception program, designed to nurture cutting-edge startups. AWS will announce selected startups for the second cohort on September 10, and the program will kick off on October 1 with in-person sessions at Amazon's Seattle campus. All 80 participating startups will be invited to attend and showcase their solutions to potential investors, customers, partners, and AWS leaders in December at re:Invent 2024 in Las Vegas. "AWS has been instrumental in enabling us to scale our generative AI platform to meet the rapidly growing demand from our global user community. Their robust generative AI infrastructure helped us reduce inferencing costs by 60% and accelerate our language model inference speeds by up to 35%," said Jachin Bhasme, co-founder and COO of Leonardo.AI, a powerful suite of generative AI tools for creators and one of the 21 startups to participate in the first cohort of the program. 
"The accelerator was also an incredible experience for us. The business and technical mentorship we received and the connections we made played a crucial role in shaping our product and strategy." "AWS has the right combination of technology, network of partners, and potential customers in the life sciences space that made it the best fit to support our vision of building the next generation of AI models for drug discovery," said Nima Alidoust, Ph.D., CEO...
Join us at our first in-person conference on June 25 all about AI Quality: https://www.aiqualityconference.com/ Matthew McClean is a Machine Learning Technology Leader with the leading Amazon Web Services (AWS) cloud platform. He leads the customer engineering teams at Annapurna ML helping customers adopt AWS Trainium and Inferentia for their Gen AI workloads. Kamran Khan, Sr Technical Business Development Manager for AWS Inferentina/Trianium at AWS. He has over a decade of experience helping customers deploy and optimize deep learning training and inference workloads using AWS Inferentia and AWS Trainium. AWS Tranium and Inferentia // MLOps podcast #238 with Kamran Khan, BD, Annapurna ML and Matthew McClean, Annapurna Labs Lead Solution Architecture at AWS. Huge thank you to AWS for sponsoring this episode. AWS - https://aws.amazon.com/ // Abstract Unlock unparalleled performance and cost savings with AWS Trainium and Inferentia! These powerful AI accelerators offer MLOps community members enhanced availability, compute elasticity, and energy efficiency. Seamlessly integrate with PyTorch, JAX, and Hugging Face, and enjoy robust support from industry leaders like W&B, Anyscale, and Outerbounds. Perfectly compatible with AWS services like Amazon SageMaker, getting started has never been easier. Elevate your AI game with AWS Trainium and Inferentia! // Bio Kamran Khan Helping developers and users achieve their AI performance and cost goals for almost 2 decades. Matthew McClean Leads the Annapurna Labs Solution Architecture and Prototyping teams helping customers train and deploy their Generative AI models with AWS Trainium and AWS Inferentia // MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links AWS Trainium: https://aws.amazon.com/machine-learning/trainium/ AWS Inferentia: https://aws.amazon.com/machine-learning/inferentia/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Kamran on LinkedIn: https://www.linkedin.com/in/kamranjk/ Connect with Matt on LinkedIn: https://www.linkedin.com/in/matthewmcclean/ Timestamps: [00:00] Matt's & Kamran's preferred coffee [00:53] Takeaways [01:57] Please like, share, leave a review, and subscribe to our MLOps channels! [02:22] AWS Trainium and Inferentia rundown [06:04] Inferentia vs GPUs: Comparison [11:20] Using Neuron for ML [15:54] Should Trainium and Inferentia go together? [18:15] ML Workflow Integration Overview [23:10] The Ec2 instance [24:55] Bedrock vs SageMaker [31:16] Shifting mindset toward open source in enterprise [35:50] Fine-tuning open-source models, reducing costs significantly [39:43] Model deployment cost can be reduced innovatively [43:49] Benefits of using Inferentia and Trainium [45:03] Wrap up
Pinecone has raised over $130 million and was most recently valued at $750 million. On this week's Unsupervised Learning, we sat down with CEO and Founder of Pinecone, Edo Liberty. Pinecone is arguably one of the most important elements in today's modern datastack. Edo shared with us the most common use cases of Pinecone, the evolving landscape of vector databases, challenges in building vector databases, the "painful" launch of serverless model, and what people get wrong the most about Pinecone. (0:00) intro(0:33) what was it like when ChatGBT came out?(6:29) Edo's favorite applications built on Pinecone(10:34) will we see more image and video applications in 2024?(14:58) best ways to deal with hallucinations(18:12) the evolving landscape of vector databases(20:27) if Edo had to build a product, what would his stack look like?(31:45) helping clients versus letting them figure things out(36:38) moving to a serverless model(40:33) what areas of AI should new startups target?(45:18) Amazon SageMaker(50:38) over-hyped/under-hyped(51:30) biggest surprises while building Pinecone(56:13) Jacob and Pat debrief With your co-hosts: @jacobeffron - Partner at Redpoint, Former PM Flatiron Health @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn @ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare) @jordan_segall - Partner at Redpoint
If you're a fast-growing company needing to leverage all that Generative AI has to offer, the last thing you want is a vendor lock-in. Or, finding out the AI solution you thought you needed doesn't actually have the juice you need. One solution? Amazon Web Services (AWS). We sit down and talk all things AI models with Shruti Koparkar, Product Marketing Lead, AI/ML Acceleration at AWS. Newsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion: Ask Jordan and Shruti questions on GenAI and AWSRelated Episodes:Ep 238: WWT's Jim Kavanaugh Gives GenAI Blueprint for BusinessesEp 232: Creating and Capturing Business Value with GenAI – Insights From HPEUpcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTimestamps:01:20 About Shruti Koparkar and AWS02:50 More on AWS and Generative AI09:54 X-rated computing used to fine-tune models. 12:46 Adobe focused on AI tools, using AWS.17:50 AI evolution and AWS preparedness for influx.21:01 Experienced ARM engineer excited about NVIDIA's potential.24:08 Identify use cases for small-scale applications.Topics Covered in This Episode:1. AWS and its Role in Generative AI2. AWS and Foundation Models2. AWS's Involvement with Companies Implementing Generative AI4. Future Preparations of AWS for Generative AI DevelopmentsKeywords:generative AI, industry insider, Everyday AI, podcast, livestream, newsletter, AWS, Shruti Koparkar, product marketing, x rated computing, NVIDIA GPUs, Amazon Web Services, cloud computing, foundation models, Amazon Bedrock, API, Amazon SageMaker, customizing models, data security, ChatGPT, Perplexity, Amazon Code Whisperer, Adobe, ARM, Leonardo dot ai, apps, NVIDIA GTC conference, accelerated computing Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
On this episode of The AI Moment, we discuss a generative AI case study– Amazon Pharmacy. Gen AI has overrun the planet in concept, but the fact is the practical use of generative AI is nascent. Amazon's Pharmacy case study is an important one that reveals lessons enterprises should learn and a blueprint they should follow in their journeys to unlocking the value of generative AI.
This episode of Software Engineering Daily is part of our on-site coverage of AWS re:Invent 2023, which took place from November 27th through December 1st in Las Vegas. In today's interview, host Jordi Mon Companys speaks with Ankur Mehrotra who is the Director and GM of Amazon SageMaker. Jordi Mon Companys is a product manager The post AWS re:Invent Special: Sagemaker with Ankur Mehrotra appeared first on Software Engineering Daily.
This episode of Software Engineering Daily is part of our on-site coverage of AWS re:Invent 2023, which took place from November 27th through December 1st in Las Vegas. In today's interview, host Jordi Mon Companys speaks with Ankur Mehrotra who is the Director and GM of Amazon SageMaker. Jordi Mon Companys is a product manager The post AWS re:Invent Special: Sagemaker with Ankur Mehrotra appeared first on Software Engineering Daily.
Welcome to this week's edition of “MI&S Datacenter Podcast” I'm Patrick Moorhead with Moor Insights & Strategy, and I am joined by co-hosts Matt, Will, and Paul. We analyze the week's top datacenter and datacenter edge news. We talk compute, cloud, security, storage, networking, operations, data management, AI, and more! Better Bedrock Building https://press.aboutamazon.com/2023/11/aws-announces-more-model-choice-and-powerful-new-capabilities-in-amazon-bedrock-to-securely-build-and-scale-generative-ai-applications HPE's Impressive Intelligent Edge Performance https://x.com/WillTownTech/status/1729883805442605382?s=20 Thumbs Up - AWS Innovation Center https://aws.amazon.com/blogs/machine-learning/introducing-the-aws-generative-ai-innovation-centers-custom-model-program-for-anthropic-claude/ AWS Speaks to Observability https://x.com/WillTownTech/status/1729211188066820455?s=20 New Skills for AWS Sagemaker https://press.aboutamazon.com/2023/11/aws-announces-five-new-amazon-sagemaker-capabilities-for-scaling-with-models Cisco Partner Summit Insights https://x.com/WillTownTech/status/1724434889464828278?s=20 AWS New Home Grown Silicon - Graviton4 and Trainium2 Disclaimer: This show is for information and entertainment purposes only. While we will discuss publicly traded companies on this show. The contents of this show should not be taken as investment advice.
Sysdig's Alessandro Brucato and Michael Clark join Dave to discuss their work on "AWS's Hidden Threat: AMBERSQUID Cloud-Native Cryptojacking Operation." Attackers are targeting what are typically considered secure AWS services, like AWS Fargate and Amazon SageMaker. This means that defenders generally aren't as concerned with their security from end-to-end. The research states "The AMBERSQUID operation was able to exploit cloud services without triggering the AWS requirement for approval of more resources, as would be the case if they only spammed EC2 instances." This poses additional challenges targeting multiple services since it requires finding and killing all miners in each exploited service. The research can be found here: AWS's Hidden Threat: AMBERSQUID Cloud-Native Cryptojacking Operation Learn more about your ad choices. Visit megaphone.fm/adchoices
Sysdig's Alessandro Brucato and Michael Clark join Dave to discuss their work on "AWS's Hidden Threat: AMBERSQUID Cloud-Native Cryptojacking Operation." Attackers are targeting what are typically considered secure AWS services, like AWS Fargate and Amazon SageMaker. This means that defenders generally aren't as concerned with their security from end-to-end. The research states "The AMBERSQUID operation was able to exploit cloud services without triggering the AWS requirement for approval of more resources, as would be the case if they only spammed EC2 instances." This poses additional challenges targeting multiple services since it requires finding and killing all miners in each exploited service. The research can be found here: AWS's Hidden Threat: AMBERSQUID Cloud-Native Cryptojacking Operation Learn more about your ad choices. Visit megaphone.fm/adchoices
What's folks, today I'm pumped to be joined by Michael Katz, CEO and co-founder at mParticle, the leading independent customer data platform.Summary: In the contentious debate over Packaged and Composable CDPs, Michael delivers a clear-eyed perspective that cuts through the hype. Rejecting the idea that Pacakged CDPs are becoming obsolete, he emphasizes the continued importance of data quality, integrity, and privacy, and he warns against becoming entangled in marketing illusions. He also highlights the need for adaptability, dismissing some of the more pervasive myths in the martech landscape, such as the magic of zero copy data. With strategic acquisitions, mParticle is focusing on intelligence and automation, aiming to be more than just “simple pipes” in data management. Michael's insights provide a grounded roadmap, focusing on genuine value creation and thoughtful navigation of the complex industry that is Customer Data Platforms.About Michael Michael got his start as an analyst at Accenture and later focused on customer acquisition and marketing strategy for a mobile content company He entered the entrepreneurial world founding interclick in 2005, a data-valuation platform for advertisers He ran the company as President and took the company public in 2009 and sold to Yahoo in 2011 for $270M He's been on the Board of Directors for several companies including Adaptly and BrightLine He's a volunteer at Southampton Animal Shelter He's also a Mentor at Techstars After a year as VP of Optimization and Analytics at Yahoo after his company's acquisition, Michael took on his second venture, co-founding mParticle in 2013 mParticle is a global, remote-first company that provides a real-time AI customer data platform. They help get the highest quality customer data to any system that marketers or product managers use – ultimately improving customer experiences. They work with big players and small, fueling the customer success of brands like Paypal, Seatgeek, Venmo, Headspace, Lyft, McDonalds, and Airbnb. Unpacking the 8 Components of Customer Data PlatformsWhen asked about Arpit Choudhury's enumeration of the eight essential components of Customer Data Platforms (CDPs), Michael's response was swift and assertive. With an appreciative shoutout to Arpit for articulating the complex aspects of CDPs, he aligned himself with the eight facets laid out in the question.These eight components, according to Michael, indeed compose an end-to-end solution for the first generation of CDPs. They include: CDI, customer data infra, collect 1st party event data from customers from website and apps ETL, data ingestion, extract data from other tools and load it into DWH Data Storage/warehousing, store a copy of data collected Identity resolution, a solution for tying together a customer's various interactions with you across multiple platforms and devices Audience segmentation, drag and drop UI Reverse ETL, extract/activate from DWH to other tools Data quality, validity, accuracy, consistency, freshness, completeness… Data governance and privacy compliance, user consent, HIPAA compliance Emphasizing the integrated nature of these components, Michael asserts that the value of the whole system is greater than the sum of the individual parts. He proudly reflects on mParticle's reputation as a complete CDP and emphasizes that many existing CDPs lack strong stories around data quality and governance.The conversation with Michael reveals his confidence in the synergy that arises when these parts function together. 
He cautions against skipping any of these steps, underscoring that a weak foundation will undermine the entire system. Speed in data processing should not compromise quality and privacy protection, and mParticle's holistic approach ensures this balance is maintained.Takeaway: Michael's insights into the eight essential components of CDPs not only align with industry experts but also highlight the importance of a unified approach. By valuing integration, quality, and consumer privacy, mParticle positions itself as a leading player in the CDP landscape. The wisdom shared by Michael emphasizes that genuine value is derived not merely from the individual elements but from the careful orchestration of all parts into a coherent and resilient system.Debunking the Myths Around Reverse ETL and Composable CDPsReverse ETL and composable CDP proponents assert that the traditional CDP is becoming obsolete and that the future lies in Composable CDPs that leverage modern data warehouses and processes like Reverse ETL. Claiming that existing CDP vendors will have to adapt to this shift or risk becoming irrelevant.Michael's written extensively about this debate over the years. He argued that product marketing around the composable CDP is just modern day sleight of hand tricks…designed to dupe the buyer. To be fair, mParticle has adapted to the rise of the modern data stack by offering services like data warehouse sync and value-based pricing. Michael highlighted the rise of the Cloud Data Warehouse as an essential system within organizations, but he was quick to emphasize that the real challenges lie in maintaining data quality, integrity, and privacy. As he elaborated, legacy CDP vendors like mParticle deliver value not in the storage of data, but in the movement and activation of it. Michael stressed the importance of going beyond mere data collection to understanding the context and the “why” behind customer behavior.According to Michael, the true value in the CDP space has shifted towards enhancing context, improving understanding, and introducing an insights layer. For mParticle, this has translated into a focus on finding truth and meaning in their data, creating an infinitely optimizing loop. He vehemently argued against reverse ETL, characterizing it as “garbage in, garbage out,” and took aim at what he described as “sleight of hand” tricks in product marketing designed to distract from the real issues.Michael challenged several narratives in the debate, dismissing the importance of zero data copy, the vulnerability of CDPs to security threats, and the notion of faster deployment times leading to sustained value. He warned against getting enticed by aggressive product marketing, stressing that what might appear easy to implement could be hard to maintain.Takeaway: The transformation of CDPs isn't just about new technologies or marketing tactics but lies in understanding the true needs of customers. With a focus on integrity, context, and sustained value, Michael exposes the fallacies in current debates, emphasizing that real success comes from creating genuine value, not just noise.The Realities of Replacing Traditional CDPs with Reverse ETL ToolsWhen asked about the growing trend where some reverse ETL customers have found ways to replace their traditional Customer Data Platforms (CDP) with reverse ETL tools, Michael acknowledged that this represents only a very narrow subsegment of the market. 
He expressed a concern that the fragmented “Do It Yourself” approach isn't always a practical solution, particularly for most businesses within the enterprise sector.Michael pointed out that during the pandemic, certain habits had developed, often driven by data engineers working with limited perspectives and without a comprehensive understanding of the complexities of running successful digital marketing campaigns. This lack of integration and understanding has led to an increasing need for a return of the decision-making power to the marketers.Highlighting the importance of usability, Michael described how mParticle is designed to make it easy for marketers to contextualize and activate data in a low code, no code manner. This approach stands in contrast to other CDPs and modern data stack tools that require intricate knowledge of SQL scripts and schema. A significant portion of his argument revolved around the practical challenges of troubleshooting across multiple different systems. He explained that when a business relies on eight or more different systems to serve the purpose of an end-to-end CDP, it introduces a unique set of complexities. If something goes wrong, troubleshooting becomes an intricate web of challenges involving different account managers. In Michael's words, “the whole thing becomes a bit of a mess.”Takeaway: Michael's insight sheds light on the realities of replacing a traditional CDP with reverse ETL tools. The fragmented approach may work for some but presents complexities and challenges that might be impractical for the broader market. Usability, integration, and streamlined workflows are highlighted as essential elements for optimizing business value, suggesting that while there are different paths to success, a straight line is often the fastest and most efficient route. The emphasis on integration over “hobbyist” solutions presents a compelling argument for businesses looking to evolve in the ever-changing landscape of martech.Debunking the Myth of Zero Copy Data in MartechWhen Michael was asked about the notion of zero copy data, he didn't mince words, immediately cutting through the hype to lay bare the underlying realities. He expressed skepticism about the idea that zero copy data is a magical solution, pointing to the assumption that copying data creates inefficiency and additional access cost.Michael argued that the cost of storage isn't the main driver of expenses; it's the cost of compute. He believes that creating duplicate copies of data doesn't drastically change costs and, moreover, that there's considerable efficiency to be gained by replicating data for different uses and use cases.He also emphasized the importance of focusing on the value side of the equation. Minimizing costs is essential to maximizing investable resources for growth, but it shouldn't overshadow the primary goal of driving customer value. Michael expressed concern that focusing on zero copy data might lead businesses down the wrong path, solving for a non-existent problem.His perspective on the issue extended to a critique of some reverse ETL companies. He noted that they often face a churn problem, luring customers in with the promise of an “easy button” only to disappoint when reality doesn't meet expectations.Takeaway: Michael's dismantling of the zero copy data concept offers a vital reminder that not all that glitters is gold in the world of martech. 
By focusing on the practicalities of costs and the importance of efficiency and value, he encourages businesses to ask the right questions and prioritize what truly matters. His argument against zero copy data serves as a caution against getting swept up in appealing but potentially misguided solutions, emphasizing instead a thoughtful approach to data management that delivers real value.Examining the Warehouse Native Approach to MartechWhen Michael was asked about the increasing trend of warehouse native approaches in martech and its potential impact on companies with large volumes of non-revenue-generating users, his response was insightful. He broke down the question into specific elements, focusing on both the technological and practical aspects of this approach.He acknowledged the structure of a typical marketing tech stack, with various components like analytics, customer engagement platforms, experimentation tools, and customer support services. However, he questioned the real beneficiaries of having all these tools built natively on the Cloud Data Warehouse. He emphasized that the benefit might lie more with the data warehouse provider than with the customer.Michael also pointed out that as different vendors leverage multiple datasets and run their own compute cycles on the data warehouse, it's not necessarily clear if that would result in cost savings. He challenged the assumption that avoiding multiple copies of data would inherently save money, stating that there hasn't been enough side-by-side comparison to substantiate this belief.He concluded that whether it's through a company like Snowflake or mParticle, they are, in essence, reselling cloud compute in different forms. Simply assuming cost savings because of a lack of data duplication might not hold true in practical terms.Takeaway: Michael's analysis of the warehouse native approach in martech opens a nuanced conversation about the real-world implications of this trend. By examining who benefits from this strategy and challenging the common assumption that it leads to cost savings, he encourages a more critical evaluation. The discussion underscores that what might appear as an intuitive solution needs more robust evidence and careful consideration to understand its true value and impact.The Insights Layer of mParticle's Approach to Customer DataIt's getting harder and harder to track the packaged vs composable battle these days, there's a ton of overlap with so many tools: ETL tools adding rETL features while rETL tools and CDIs becoming composable CDPs CDPs adding product analytics and AI features while product analytic tools adding CDP and AI features CDPs adding marketing automation features while MAPs adding CDP features CDPs also adding “warehouse connectors” or “warehouse sync” Adding an interesting layer to the debate here is extending the capabilities of the CDP into new areas. mParticle made some interesting acquisitions over the last few years: Aug 2022 Vidora, AI personalization platform for customer data Jan 2022 Indicative, a customer journey analytics platform to address data entropy With these capabilities, mParticle is adding an intelligence layer that not many CDPs have. 
Not only are they capturing and helping customers move data around, they're helping them make sense of the data, look back to see what happened and also make predictions on what will happen.Initially, mParticle's efforts were directed at solving mobile data collection challenges, aiming to set up organizations on a durable and scalable API-based system. By addressing unique mobile data challenges that no one else was confronting, they sought to position themselves at the center of mass for many consumer brands.According to Michael, the solution to these challenges led to mParticle's focus on multi-channel data challenges, revolving around vital components like data quality, governance, and identity resolution. Identity resolution, Michael believes, remains one of the most misunderstood aspects of the whole process.But the vision didn't stop there. The evolution went beyond these challenges, aiming at what would come next: intelligence and automation. The acquisitions of Vidora and Indicative, as Michael revealed, probably accelerated mParticle's roadmap by four or five years.Michael brought to light mParticle's ambitious strategy to move beyond mere segmentation tools and “simple pipes.” As Michael argued, many existing tools are like “simple pipes” that do exactly what you tell them to do. However, mParticle's approach aims to be an intelligent force that moves the industry forward.Michael's discourse paints a picture of a company that's not just satisfied with optimizing first-generation capabilities. It's a story of looking ahead, focusing on intelligent pipes and striving to put customers in the best possible position to extract value from their first-party customer data.Takeaway: By focusing on next-generation capabilities and accelerating their roadmap through strategic acquisitions, mParticle is positioning itself as a leading force in the evolving landscape of martech. The compelling insight is their move towards intelligent pipes that can make sense of the data, not just move it around, guiding the industry into a new era of customer data understanding and utilization.The Vidora Acquisition: Empowering Marketers with Machine LearningWhen asked about the acquisition of Vidora and its integration into mParticle's CDP offering, Michael dove into the compelling dynamics behind this strategic move. The conversation revolved around AI tools like IBM's Watson Studio, Amazon SageMaker, and Google's AutoML, which are generally built for data scientists. What set Vidora apart, however, was its design to be accessible to knowledge workers and marketers, aligning with the founders' vision to democratize machine learning.Michael was keen to clarify that many tools in the market offer a single type of machine learning, often centered around propensity scores. But Vidora went beyond, impressing him with the building of diverse ML pipelines. The suite enabled regression testing, propensity scoring, uplift analysis, and more, without constraining the types of intelligence or automation that customers could access.According to Michael, the uniqueness of customer data demands tailored solutions, as no two customers' data look, shape, or behave the same way. With Vidora, now branded as Cortex, mParticle has extended a full suite to users that align with various channels. 
The seamless integration of models within mParticle allows marketers to create, sync, and activate models effortlessly, accommodating different channels from paid advertising to customer support.But what really resonated with Michael's view was how this acquisition tackled a common industry problem: the gap between the creation of cool models and their actual implementation into production. Most in-house models never see the light of day, and those that do are often channel-specific, failing to transcend their original context. Cortex, on the other hand, offers flexibility without channel dependency, backed by mParticle's robust and diverse set of connectors.Takeaway: mParticle's acquisition of Vidora, rebranded as Cortex, has redefined the machine learning landscape for marketers. It provides a versatile and accessible set of tools that break down conventional barriers and facilitate the practical application of models across diverse channels. By doing so, it empowers marketers to extract greater value from data and paves the way for a more intelligent and integrated approach to customer engagement.Innovating Martech Pricing: A Fresh Approach to Value-Based PricingWhen asked about the recent shift in mParticle's pricing structure, Michael delves into the exciting philosophy behind this change. He emphatically expresses that the change isn't merely superficial, but rather a product of innovation, something that's more than just a re-packaging of their pricing model. Michael explains the need for de-averaging or de-aggregating pricing, acknowledging that the traditional charging based on users or events is fairly straightforward, but it doesn't capture the full picture. According to Michael, not all events, users, or use cases hold equal value, and treating them as such creates a logjam through the system. This one-size-fits-all approach undermines the ability to provide marketers with appropriate solutions.The heart of the problem is that this logjam prevents Customer Data Platforms (CDP) from having access to all necessary data, typically due to how they are priced. Michael highlights that when they analyzed how customers were using mParticle, they discovered three distinct use cases: real-time event federation, data maintenance for historical lookup and redundancy, and targeting and personalization.With this fresh approach, mParticle managed to “unclog the pipes” of data, allowing it to flow where needed and at the right pace. This shift allowed for acceleration in audience calculation and refresh, and extended the look-back window on real-time audiences from a mere 90 days to perpetuity without sacrificing performance.Takeaway: Michael's insights into mParticle's new pricing structure reveal an innovative and necessary departure from traditional user or event-based pricing. By recognizing the unique value in different data points and use cases, mParticle has managed to not only create a more effective pricing model but also to enhance the functionality and efficiency of their platform. It's a lesson in understanding the complex dynamics of the martech space and the importance of aligning pricing models with actual value and functionality.Empowering Black Founders with TechnologyOne of the coolest discoveries when digging through Michael's socials is that he actually created Tech for Black Founders. 
He got together with a list of data vendors to provide free software to early-stage startups led by Black founders, as part of an initiative to support Black technologists and entrepreneurs, who currently make up only 1% of founders backed by venture capital in the US. In the midst of 2020, during a peak of social unrest, he found himself pondering how his company, mParticle, could serve the community better. It was more than a fleeting thought; it was a shower epiphany that would soon spark a wave of empowerment for black technologists and entrepreneurs.Michael's initiative, which might seem simple, was profound. Recognizing that black founders made up less than 1% of those backed by venture capital in the U.S., he set out to make a difference. The idea was to provide free software from leading tech companies to early-stage, black-led start-ups. The aim was to bridge the equity gap, offering services usually costing six to seven figures to those underrepresented.He texted friends and fellow founders from braze, amplitude, branch, and more. His proposal was met with instant approval, and a simple application page was launched. What happened next was nothing short of extraordinary. The initiative went viral, with 50 to 100 companies reaching out, eager to contribute, and the movement continues to grow, now encompassing hundreds of companies offering their services to black and other minority tech founders.Takeaway: Michael's leadership in rallying tech companies to offer free software to black and minority tech founders is a powerful example of how one person's idea can ignite a movement. It underscores the importance of community and collaboration, and showcases a tangible effort to close the equity gap in the tech industry. Simple, immediate, and impactful, it's a testament to what can be achieved when passion meets purpose.Finding Balance and Joy in a Multifaceted LifeWhen asked about how he remains happy and successful amidst his diverse roles as a founder, writer, sports fanatic, dad, animal shelter volunteer, mentor, and board member, Michael's response is a reflection of self-awareness, clarity, and wisdom. His take on balancing a life filled with various passions and responsibilities is both refreshing and deeply inspiring.First and foremost, Michael's priority is being a dad, a role he deems his most important job. Everything else, whether it's being a CEO or a board member, follows in sequence. He admits that although he doesn't always follow his own advice, the goal isn't merely about becoming proficient at navigating the ups and downs of company building and the entrepreneurial journey. Instead, it's about transcending these fluctuations and reaching a state of equanimity.Michael stresses that the pursuit isn't happiness itself; rather, the pursuit is happiness. Finding joy, meaning, and growth in whatever he's doing is what keeps him motivated and content. He measures his alignment with his work by his excitement every Monday morning and his anxiety every Friday for not getting enough done. If those feelings begin to reverse, that's his cue to reassess his path.Takeaway: Michael's philosophy on balance and happiness is a profound lesson in understanding one's priorities and embracing the journey itself as the source of joy. His words are a reminder to find contentment in the pursuit, to align passions with purpose, and to recognize the importance of self-awareness in living a fulfilling life. 
His perspective turns the conventional wisdom of “work-life balance” on its head, offering a unique insight into living a life filled with meaning and happiness.Michael Teases Exciting Announcements from mParticleWhen asked if there was anything he wanted to share with the audience or any exciting things launching soon, Michael's response was filled with enthusiasm and intrigue. He hinted at some compelling announcements coming from mParticle in September. Without divulging specific details, he provided a glimpse into what the company is focusing on.Michael mentioned that these new developments would continue to expand on their mission of creating value. They are looking to transpose their services and add value not just in their own data store but across any data store, including the data warehouse ecosystem. Though he kept the specifics under wraps, the anticipation in his voice was clear. The audience was left eagerly awaiting the “cool stuff” that mParticle has in store.Episode RecapThe martech industry is no stranger to bold claims and sweeping predictions, and the recent debate around Reverse ETL and Composable CDPs is no exception. The air is thick with assertions that traditional CDPs are going the way of the dinosaur, set to be replaced by sleek, modern solutions. Michael, however, has a more grounded take.For starters, he considers the buzz around Composable CDPs to be a well-executed marketing illusion, a sleight of hand rather than a genuine revolution. Sure, modern data warehouses and Reverse ETL processes are capturing attention, but at the core, the need for data quality, integrity, and privacy still reigns supreme. Michael doesn't view this shift as a death blow to existing CDP vendors like mParticle, but rather a call to adapt, focusing on the movement and activation of data.Adaptation is a theme that resonates throughout Michael's insights. While acknowledging that some Reverse ETL customers are indeed replacing traditional Customer Data Platforms, he emphasizes that this trend represents a narrow slice of the market. The fragmented “Do It Yourself” approach has its limitations, especially when applied to the complex landscape of enterprise-level marketing. Here, mParticle's approach stands out, prioritizing usability and enabling marketers to contextualize and activate data without becoming entangled in intricate coding.Michael doesn't shy away from debunking popular narratives in the debate, including the myth of zero copy data. Cutting through the hype, he directs attention to the real drivers of expenses and underscores the importance of focusing on customer value over cost-cutting.Perhaps the most intriguing aspect of Michael's perspective lies in the strategic evolution of mParticle. The company's recent acquisitions, including Vidora, an AI personalization platform, signal a commitment to intelligence and automation. Moving beyond simple data collection and segmentation, mParticle aims to become an intelligent force that drives the industry forward. Their tools aren't mere “simple pipes”; they're designed to meet the unique needs of customers and provide tailored solutions that enhance understanding and value extraction.All and all, Michael offers a refreshingly realistic and actionable perspective on the current CDP landscape. Rather than getting caught up in marketing tricks or chasing after the latest shiny object, he encourages a return to core principles and a commitment to intelligent, adaptable solutions. 
It's an approach that recognizes the complexity of the industry while providing clear pathways for growth, innovation, and genuine value creation.Whether you're a marketer, data engineer, or business leader, listen below for insights that offer a solid foundation for navigating the ever-complex world of martech and data platforms, without falling prey to illusions or unnecessary complexity.✌️--Intro music by Wowa via UnminusCover art created with Midjourney
AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
In this informative episode, we delve into the significant update in Salesforce's Einstein Studio, highlighting its new capability to integrate custom models, starting with Amazon SageMaker. Learn from experts about the nuances of this integration, potential use-cases, and how it might revolutionize your CRM strategies by leveraging AI and machine learning. Get on the AI Box Waitlist: https://AIBox.ai/ Investor Contact Email: jaeden@aibox.ai Facebook Community: https://www.facebook.com/groups/739308654562189/ Discord Community: https://aibox.ai/discord Download Selfpause: https://selfpause.com/Podcast Follow me on Twitter... er... X.com: https://twitter.com/jaeden_ai
AWS Morning Brief for the week of July 31, 2023, with Corey Quinn. Links: The new Amazon Chime 5 on Windows, macOS, and web is coming soon - Amazon Chime Help Center Access and Query are now generally available for Amazon Managed Blockchain AWS Lambda adds support for Python 3.11 AWS Entity Resolution: Match and Link Related Records from Multiple Applications and Data Stores New – Amazon EC2 P5 Instances Powered by NVIDIA H100 Tensor Core GPUs for Accelerating Generative AI and HPC Applications New – AWS Public IPv4 Address Charge + Public IP Insights Preview – Enable Foundation Models to Complete Tasks With Agents for Amazon Bedrock Migrating AWS Lambda functions from the Go1.x runtime to the custom runtime on Amazon Linux 2 Introducing Smithy for Python Introducing AWS HealthScribe – automatically generate clinical notes from patient-clinician conversations using AWS HealthScribe Analyze rodent infestation using Amazon SageMaker geospatial capabilities AWS Reaffirms its Commitment to Responsible Generative AI Amazon SageMaker Canvas announces SOME THINGS I AM NOT GOING TO TELL YOU ABOUT
In this episode, Antonio Rodriguez, Senior AI/ML Specialist at AWS, shows us the creative power of Generative AI and its potential within everyone's reach. Get ready for a fascinating journey into the future of artificial intelligence in our new episode. This is episode 11 of the fourth season of the AWS Charlas Técnicas podcast.
Amazon SageMaker Multi-Model Endpoints (MME) is a fully managed capability of SageMaker Inference that lets customers deploy thousands of models on a single endpoint and save costs by sharing the instances behind that endpoint across all of the models. Until recently, MME was only supported for machine learning (ML) models running on CPU instances. Now customers can use MME to deploy thousands of ML models on GPU-based instances as well, and potentially reduce costs by up to 90%. MME dynamically loads and unloads models from GPU memory based on incoming traffic to the endpoint, and costs stay low because the GPU instances are shared by thousands of models. Customers can run ML models from multiple frameworks, including PyTorch, TensorFlow, XGBoost, and ONNX. To get started, use the NVIDIA Triton™ Inference Server and deploy models on SageMaker's GPU instances in “multi-model” mode. Once the MME is created, customers specify which model they want inference from when invoking the endpoint. Multi-Model Endpoints for GPU are available in all AWS Regions where Amazon SageMaker is available. To learn more, check out our launch blog: https://go.aws/3NwtJyh and the Amazon SageMaker website: https://go.aws/44uCdNr
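For readers who want to see what this looks like in practice, here is a minimal, hypothetical sketch (not part of the announcement) of invoking a GPU multi-model endpoint with boto3; the endpoint name, model archive name, and payload are placeholders, and the endpoint is assumed to have been created from a container registered in "MultiModel" mode pointing at an S3 prefix of model archives.

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

# Placeholder payload; use whatever input format your model expects.
payload = {"inputs": [[0.1, 0.2, 0.3]]}

response = runtime.invoke_endpoint(
    EndpointName="my-gpu-mme-endpoint",   # hypothetical endpoint name
    TargetModel="resnet50-v1.tar.gz",     # which archive under the S3 prefix to load and serve
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(response["Body"].read().decode())
```

The key piece is the TargetModel parameter: the same endpoint serves whichever model archive you name, and SageMaker loads it into GPU memory on demand.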
The beginning of the end of the HDD. IT Business, 2023-06-16 06:05:07, Mobiltech. A game has been released that requires an SSD even in its minimum (!) configuration. Given that new laptops have been fitted almost exclusively with SSDs for about a year now, this was to be expected. Still clinging to that huge 8 TB spinning HDD for your PC game installs? Well, it's starting to look like SSDs will be the new minimum for modern PC games. A new way to fight speeding: the music in your car becomes unlistenable if you drive too fast. PCWorld, 2023-06-16 09:12:00, Infotech, Finland, Speeding. A Finnish company wants to win over the big streaming services to its unconventional punishment for speeding. EU commissioner: member states' decisions to ban Huawei and ZTE from their networks are justified. Mínuszos, 2023-06-16 04:33:53, Mobiltech, China, European Union, Huawei, Bans, EU commissioner. The decision by some European Union (EU) member states to ban Chinese telecommunications technology from Huawei and ZTE from their telecom networks is justified and in line with EU rules. Thierry Breton, the EU commissioner for the internal market, has voiced concern in recent weeks that too few EU member states have banned the Chinese vendors. After this year's slump, tablets will stay down while PCs claw their way back. Bitport, 2023-06-16 07:55:00, Mobiltech. Sales of personal computing devices could strengthen considerably over the next two years, but the growth will come almost entirely from PCs. A new mobile app at Rossmann. Digital Hungary, 2023-06-16 12:00:00, Mobiltech, Phone. Rossmann has taken its users' online shopping experience to a new level: the drugstore chain's mobile app has been renewed in both design and content. The changes improve the user experience; shoppers can now reach every feature more simply and quickly and order products with a few taps on their phone. You can fly around Budapest's biggest landmarks from your armchair. Rakéta, 2023-06-16 14:12:03, Infotech, Google, Szent István, GPS, Parliament. Thanks to a new Google Maps feature, we can admire the capital's most famous buildings and landmarks from an entirely new perspective. With the Immersive View feature, you can virtually fly around St. Stephen's Basilica, the Parliament building, or the Fisherman's Bastion, among others. A huge stone artifact has been found in Egypt, a peerless find that thoroughly rewrites our picture of the ancient kingdom. Promotions, 2023-06-16 13:13:00, Science, Egypt. Egypt's secrets have still not been fully uncovered, and this latest discovery could be the key to its onetime power. ChatGPT's challenger, Falcon 40B, makes its debut. Márkamonitor, 2023-06-16 08:36:06, Infotech, Artificial intelligence, ChatGPT, AWS. The new AI-based language model is built on one trillion tokens. The world's leading open-source large language model (LLM) has made its public debut and become widely available. The 40-billion-parameter model was trained on Amazon Web Services' cloud machine learning platform, Amazon SageMaker. It wasn't the combat system that id Software helped with on Starfield. theGeek, 2023-06-16 14:32:25, Gaming. Bethesda has revealed what kind of support it received from the makers of Doom and Wolfenstein, id Software's developers, for its sci-fi RPG. Starfield is one of the most anticipated games of 2023, promising a huge space adventure from Bethesda Game Studios.
The studio behind the Skyrim and Fallout series was not shy about asking other developers for help. Pay attention if you don't have unlimited mobile data. First Class, 2023-06-16 08:06:51, Mobiltech, Mobile internet. One of the music streaming platforms is rolling out a very useful feature; those with limited mobile data plans will be pleased. The Munkahelyeink.hu career portal has received a one-million-euro capital injection. Digital Hungary, 2023-06-16 08:32:00, Economy, Investment, Artificial intelligence. The international WhereWeWork.com career portal network, which also operates the Munkahelyeink.hu employee and employer portal, has received an investment worth one million euros. The company will spend it on further developing its artificial intelligence and machine learning algorithms and on expanding into new markets. Starship could launch again soon. IT Business, 2023-06-16 07:09:09, Mobiltech, Space, SpaceX. According to SpaceX's CEO, they will attempt to launch the second Starship prototype, Ship 25, within one and a half to two months. As we reported earlier, after an unsuccessful launch attempt, SpaceX launched the world's largest space rocket, Starship, on April 20. Although the launch itself went well, the rocket was ultimately destroyed. Technologists are busy building AI, consultants are busy selling it. Bitport, 2023-06-16 09:49:00, Infotech, Artificial intelligence. McKinsey, too, has come out with its own theory of where and how much artificial intelligence will add to the global economy. A handful of sectors will reap most of the gains.
In this episode of the Delphi podcast, host Tommy interviews Emad, the CEO of Stability.ai, an open source AI company dedicated to democratizing access to AI tech. Emad explains how his company handles the rapidly evolving field of AI and machine learning, including their curation, filtering, and priority-setting process and the importance of building all types of models. The discussion also touches on the potential threat of artificial general intelligence to humanity. [00:01:32] Artificial General Intelligence. [00:04:11] Misaligned AI and ISIS. [00:07:26] Need for decentralization in AI. [00:11:17] Optimizing data for aligned AI. [00:13:46] Culture-specific AI models. [00:17:53] Standardized models for intelligence. [00:21:19] OpenAI's objectives and actions. [00:24:07] AGI vs. productivity. [00:30:07] The era of big models. [00:30:51] AI's mind-boggling compression. [00:36:37] Education with AI tablets. [00:38:31] AI's emergent behavior. [00:41:02] AI and Government Control. [00:44:53] New definition of intelligence. [00:48:45] The AI Bubble. [00:52:23] Impact of AI on economy. Socials: Emad's Twitter, Tommy's Twitter. Recent Articles to Note: Forbes article against Stability.AI, Emad's Response. Disclosures: This podcast is strictly informational and educational and is not investment advice or a solicitation to buy or sell any tokens or securities or to make any financial decisions. Do not trade or invest in any project, tokens, or securities based upon this podcast episode. The host and members at Delphi Ventures may personally own tokens or art that are mentioned on the podcast. Our current show features paid sponsorships which may be featured at the start, middle, and/or the end of the episode. These sponsorships are for informational purposes only and are not a solicitation to use any product, service or token. Delphi's transparency page can be viewed here. Keywords: Stability.ai, open source AI, machine learning, text to image AI, artificial general intelligence; AGI, dangers, AIs, humans, fake news; AI, education, healthcare, productivity gains, unemployment, panopticon, centralized control, propaganda, metaverse, regulations, government control, decentralization, distribution, Internet 2; Google, Microsoft, Meta, AI, China Social Credit Score System; entity, alignment, national models, data sets, AGI; information rails, Reddit, structured data sets, AI, business model; data center, cloud, partnerships, revenue, open auditable AI; Google Search, standardized models, primitive building blocks, custom model creation, Amazon SageMaker; models, business, culture, library, AGI; education, healthcare, AGI, automation, open source A.I.; Amazon, supercomputer, petaflops, ranking, infrastructure; technology, deals, GPUs, super compute, partners; stable diffusion, clusters, optimization, speed, size; language, AI, productivity, scalability, data; GPT-4, compute, training, language models, productivity increase; supercomputer, AI, training, terrorism, model weights; Iran, nuclear weapon, technology, AI, government; GPUs, data, compute, intelligence, deep learning; protests, deep learning models, AGI, swarm intelligence, AI bubble; investing, business model, AI, finance, moats; AI startups, SaaS revenue, exit, Stability, Web3 chains, infrastructure, consolidation waves, CACLTVs, open side, production models.
An audio podcast about using Generative AI on AWS. Includes a background on Generative AI and demos of AWS CodeWhisperer and Amazon SageMaker. For the video version, check out https://www.youtube.com/watch?v=a4DTIUmjQuQ&t=1887s --- Send in a voice message: https://podcasters.spotify.com/pod/show/vishnu-vg/message
What's up everyone, if you haven't checked out the previous 3 episodes in our AI series, you might want to before this episode; we give you a lot of context around some of the events that have happened and that will shape today's conversation. So basically: How fast could AI change or replace marketing jobs? How can marketers stay informed and become AI fluent? Exploring new paths to future-proof your marketing career in the age of AI. Today we're diving into specific tools… there's a lot of noise out there right now. What tools you should play around with. In TMW #107 | ChatGPT and the artificial marketer, Juan Mendoza explains that “...generative AI tools are already everywhere. From text generation to video and audio production, to image creation, there's a thriving industry of technologies taking small slices out of our creative talents, packaging them up, and selling them as a SaaS product on a recurring revenue model. If you're wanting to stay relevant five years from now in the marketing technology industry, you're probably going to have to learn some of these platforms. In 2010 we used to say: “there's an app for that”. In 2023, we will be saying: “there's an AI for that.”” Outline. Here are some of the topics for this third AI episode: key AI technology definitions and how to differentiate real AI tools from all the noise out there; a deep dive into tools: content marketing tools, email and marketing automation tools, predictive analytics tools, text-to-presentation and pitch deck tools, 3D animation tools for product marketers, sales and outreach tools, text-to-website creator tools, ad and social creative tools, AutoGPT and AI agents; and a bunch of other tools like conversational search engines, 1-1 convos with celebrities, and an even longer list of honorable mentions. Here's today's main takeaway: the key to future-proofing your marketing career in the ever-changing AI landscape is to stay curious, get your hands dirty and experiment fearlessly: fill out some forms, spin up free trials, get on wait lists, and give new AI tools a chance. It's only by actually getting your hands dirty that you'll discover which tools truly work for you and which are just part of the ever-growing sea of gimmicky AI tools. Definition of tech terms. I'll be using some of these terms throughout my analysis of these tools, so here's a primer explaining the three most common AI technologies used for marketing applications: ML (Machine Learning): ML is a way to teach computers to learn by themselves, without having to be programmed for every task. They learn from examples and data patterns to make predictions or decisions. Applications include segmentation, predictive analytics and propensity models. NLP (Natural Language Processing): NLP is a subset of ML and focuses on enabling computers to understand, interpret, and generate human language. It includes sentiment analysis, machine translation, named entity recognition, text summarization, and more. NLP techniques usually help computers understand and communicate with humans using everyday language. GNN (Graph Neural Network): a GNN, also a subset of ML, is a type of neural network that aims to handle graph-structured data, that is, data organized like a network or web of connected points. Applications include analyzing relationships between different things, like users in a social network or users in your database, or recommending additional products based on past purchase history.
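To make the “propensity model” application in the ML definition above concrete, here's a toy sketch (not from the episode) that scores synthetic users by purchase likelihood with scikit-learn; every column name and number is made up for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Made-up engagement features for 1,000 users.
users = pd.DataFrame({
    "sessions_last_30d": rng.poisson(5, n),
    "emails_opened": rng.poisson(3, n),
    "days_since_signup": rng.integers(1, 365, n),
})

# Synthetic label: more engaged users are more likely to have purchased.
score = 0.4 * users["sessions_last_30d"] + 0.3 * users["emails_opened"] - 0.01 * users["days_since_signup"] - 2.0
purchased = (rng.random(n) < 1 / (1 + np.exp(-score))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(users, purchased, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Propensity score per user: probability of purchase, usable for segmentation and retargeting.
users["purchase_propensity"] = model.predict_proba(users[X_train.columns])[:, 1]
print(users.sort_values("purchase_propensity", ascending=False).head())
```

That's all a basic propensity model is: a classifier whose predicted probabilities become a score you can segment and target on; the commercial tools discussed below wrap this kind of workflow in product features.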
Real AI vs noise. Part of the reason AI gets a really bad rep, especially in martech, is that anything that's built on if statements or simple JavaScript logic gets called AI. There are still plenty of AI startups that shout about their proprietary AI when it's probably just a few decision trees and a few interns running spreadsheets. Now, though, you have an even bigger bucket of noise that's essentially a “slight tweak on ChatGPT”. Developing AI that was comparable to human performance was a challenging feat prior to GPT's arrival. To achieve this level of sophistication, a company would have had to make a substantial investment, amounting to millions of dollars, developing its own algorithms and performing extensive data cleanup. But it's so easy now because GPT is so good out of the box. Allen Cheng puts it simply: starting a new AI venture can be achieved by assembling a few elements: a product developed on GPT-4's user-friendly API, a website, and a marketing campaign. This is why we're seeing hundreds of AI tools pop up every week. A lot of these GPT-based products are pretty much indistinguishable from one another. Maybe a handful have a significant advantage over others, but most are gimmicky. And over the next few months, every tool is going to be integrating ChatGPT features inside their products in the hopes of making them stickier. The threat of GPT-n. The part that I find trickiest and the most discouraging about building anything on top of GPT is that any progress you make on fine-tuning GPT-4 will totally be wiped out by GPT-5 or GPT-n… kind of like we talked about in a previous episode with all the tools GPT's plugins killed. So let's cut through the noise and dive into legit AI tools, the ones you should be playing with and experimenting with. Content marketing tools: Copy.ai and Jasper https://copy.ai/ https://jasper.ai/ AI text generators are very common these days; the two most popular tools, especially for marketers, are Copy.ai and Jasper. Both allow you to bypass the initial stage of writing where you face a blank page. The promise of these tools is that they help you in generating ideas, saving time on brainstorming and drafting, and ensuring a consistent production flow, freeing you to focus on higher-level strategic tasks, original research, and connecting with your audience. I've played around with both Jasper and Copy.ai before ChatGPT came out… and they were super unique. But both Copy.ai and Jasper are built on top of GPT; they essentially rent usage of the platform. So they built a pretty nice UI on top of GPT… but now that ChatGPT is out, I'm sure they've seen a drop in usage. Plus GPT-4 is 3 times more expensive. They still offer marketing-specific value though, and can get you up to speed faster than using ChatGPT, in the form of templates, prompts and workflows. Both are super powerful; you could make a case that Jasper outshines its counterpart a bit, especially on longer content formats, but it's also way more expensive. Miguel Rebelo from Zapier has a solid breakdown comparison here: https://zapier.com/blog/jasper-vs-copy-ai/ GrammarlyGO https://www.grammarly.com/grammarlygo Grammarly, the popular spelling and grammar checker which has been using AI for over a decade already, also entered the generative AI niche last month, unveiling GrammarlyGO. You guessed it: built on GPT. It has a particular advantage because Grammarly is already widely used and this is just an extra feature, so to speak.
Instead of just checking your grammar, it can now also help you with drafting documents, composing and responding to emails, editing writing for tone, clarity, and length, and brainstorming ideas or outlines for projects within the apps you're already working in. Lots of tools are going the Grammarly route in adding GPT features to their product, like Notion and more recently Superhuman. Other AI writing tools: some of these specialize in SEO, some in long-form content, some in short form… they all do similar things: Copysmith https://copysmith.ai/ Anyword https://anyword.com/ Writesonic https://writesonic.com/ Copymatic https://copymatic.ai/ Yaara https://www.yaara.ai/ Rytr https://rytr.me/ Frase https://frase.io/ Email. Email is just one channel for potential generative content tools, so it's not totally distinguishable from the tools we chatted about in the content category. Chances are the Marketing Automation platform or customer engagement platform you're already using has a suite of features they're throwing AI next to. Most of these, though, are just ML. Some tools like Iterable and Braze have propensity models to find users that are likely to perform a purchase event; that's ML, and it's only based on your data set. Some tools like Seventh Sense throw AI in front of Send Time Optimization features; these have been around for a long time and are only based on your recipients. This is also ML. Some tools throw AI in front of Sentiment Analysis features, allowing you to analyze and classify the emotional tone of text into useful data. This is a subset of NLP that uses ML. Some tools like 6sense throw AI in front of conversational email assistants that are supposed to be a step up from static drip email sequences. This is also a form of NLP and generative AI. You're likely to start seeing copy assistants and additional AI features powered by GPT inside these tools. I wanted to chat about one product that I feel stands out from others in terms of being built around AI rather than it simply being a feature on the side. Persado https://www.persado.com/ One example I've seen (but haven't used myself) of a CEP using more advanced AI is a tool called Persado. The two co-founders are former founders of Upstream in the mobile marketing space. Similar to 6sense's conversational email platform, they've been around for a decade and they claim to leverage NLP + ML to create, optimize, and personalize engagement messaging. So they essentially analyze a bunch of data and identify the most effective emotions, CTAs, language, and phrases to drive engagement. It's worth calling out because it's more than just predicting user behavior and optimizing the best time to send a message; it takes it a step further by also incorporating NLP techniques, understanding the nuances of human language, and generating custom marketing messages that resonate with a unique user. One thing that makes Persado unique is that it's not built on GPT; it has its own model that's trained on more than 100 billion digital business language impressions across industries. Potentially fewer data points than GPT, but arguably better and more relevant impressions. As Lisa Spira, VP of Content Intelligence at Persado, explains in an interview with MarTech Series, “models like OpenAI's ChatGPT are trained on the entire “Wild West” of the Internet, so their results, while delivered confidently by the model, can be riddled with inaccuracies, or even offensive content”.
She adds that “Generative AI tools might be capable of writing anything, but we're able to cut through that noise, and train our generative AI to write in a way that motivates recipients to take actions: to open an email, convert on a shopping cart webpage, or stop scrolling and click on a social media ad.” And not just generative AI. Persado is pushing a term called Motivation AI, and they have a really cool example of it. Let's say you're in D2C and you're selling sweaters. Generative AI gives you level 1 and 2 content, which is Intelligibility and Relevance, so it might spit out content like: “Sweaters are a knitted garment that usually have long sleeves and are designed to be worn over other clothing” or “In the fall, sweaters provide an extra layer of warmth in the cooler temperatures. They are a stylish choice for fall fashion because they come in a variety of colors and styles.” The next stage is including Emotion, to get an output like: “You're going to love these cozy sweaters for fall.” And the following stage, where Persado claims to play, is Motivation, where you would get an output like: “A cozy invitation: indulge in our luxuriously soft sweaters this fall.” Now, this might all be semantics. I'd argue that with great prompting you can get to generative content that includes motivation and emotion. Optimove https://www.optimove.com/ This tool could actually go in the CRM, the CDP or even the email category with the acquisitions they've made in the last few years, but another name that I've heard a few times is Optimove. They play in the enterprise arena and primarily serve retailers and gaming operators, but they have an interesting suite of AI tools for marketers. I've personally not used the tool, but they claim to provide personalized recommendation campaign orchestration with their AI-based marketing bot (Optibot). It provides what they call self-optimizing campaigns, which are essentially advanced A/B/n campaign tests that automatically serve the best treatment to individual customers through the use of ML. Source: https://www.optimove.com/pdf/Optimove-Product-Features.pdf Predictive Analytics. Predictive analytics is a super cool niche of AI application. It essentially enables us to anticipate trends and patterns of consumer behavior based on a bunch of user data. Done right, you can do cool things like uncover users ready to buy amongst a sea of tire kickers, find free users primed for upsells, and anticipate customers about to churn. Vidora Cortex (mParticle Intelligent Attributes) https://www.vidora.com/platform/ I wanted to start by calling out a few no-code / low-code predictive analytics / ML pipeline platforms. There's not a ton in this space, but it has been growing over the past few years. Many of the big analytics players, like Qlik's AutoML, IBM's Watson Studio, Amazon SageMaker, and Google's AutoML, have an ML-specific tool that does this, but it's built for data scientists. Vidora is worth calling out here because it was built more for knowledge workers. Vidora was founded by 3 academics while pursuing their Ph.D.s at Caltech and Berkeley; they built systems that utilized machine learning to convert raw data into informed decisions. Their initial vision was to democratize machine learning capabilities. They are a small 20-person startup in SF, but their product offers huge potential, so much so that they were acquired by mParticle, a popular CDP.
Vidora's platform, Cortex, provides you with the ability to build distinct pipelines for your customer data, so you can then produce various predictive attributes tailored for particular applications. For example, if your goal is to identify the customers with the highest probability of purchasing your product in the next 30 days, the pipeline allows you to enhance your retargeting ROI by focusing on this user segment. Similar: https://www.pecan.ai/ https://www.akkio.com/ 6sense https://6sense.com/ It's worth highlighting 6sense here as well. It's not specifically an ML tool, but it has a wider set of use cases that are a mix of intent data and predictive analytics and a slice of lead scoring… and they've been around for a decade. They use real-time data about your buyers to predict their buying intent and what their current purchasing stage might be. They uncover some of the magic in a blog post about their predictive engine here. They claim to be using: intent data, web visits, content engagement (1st party) and 3rd party user behavior data from all over the Internet; historical data, events that led up to sales in the past; and ICP, firmographic and technographic data. So they layer intent data on top of historical data and filter it through your ICP, and the output is a target list of accounts that show intent to buy and are a good fit for your solution. The secret sauce here is really the 3rd party intent data. This is hidden data that they are able to assign to leads and accounts. Part of the source here is 6sense's proprietary intent network plus B2B review sites like G2, TrustRadius, and PeerSpot. Kumo https://kumo.ai/ Kumo was founded by Jure Leskovec, the Stanford University computer science professor who's known for his bold assertion that AI would eventually be capable of predicting the future. He partnered with the former CTO of Pinterest and Airbnb as well as the former Head of Growth AI at LinkedIn to build Kumo. Using Kumo, companies can not only analyze past events but also predict future opportunities. Kumo uses GNNs (Graph Neural Networks) to identify patterns and relationships in complex data sets that cannot be easily analyzed using traditional statistical or machine learning techniques. This essentially allows marketers to anticipate customer behavior (how much customers will spend, which new products they will be more interested in, things that would make them leave for a competitor) and offer personalized product recommendations, promotions, and communication. Predictive analytics isn't a new concept, though. We talked a bit about this in our first episode when we mentioned propensity models, which tons of larger companies employ today. But this is operationalizing it a step further, and not just on your company's datasets. And imagine unlocking this capability for startups. So the idea is that marketing operations teams would shift their focus to future customer behaviors. Presentation. Tome https://beta.tome.app/ There are countless AI productivity tools that are all super similar, but one that's been buzzing lately is Tome. It was founded by product leads from Facebook and Instagram, and recently raised $43 million in its Series B funding round. They launched an innovative document-to-presentation AI tool, which utilizes GPT-4 to generate text and images and transform a document into compelling presentations, narratives, and stories. Tome's tile system sets it apart from other generative PowerPoint tools on the market and gives Microsoft a run for its money.
Not only does it offer robust functionality, but it also boasts a sleek and impressive design. Similar: https://gamma.app/ 3D Animation. Spline AI https://spline.design/ai Spline AI was created by Alejandro León; it's a YC21 startup that's building a 3D design platform comparable to Figma, and it basically allows you to convert text to 3D. For decades, creating in 3D has been super hard. Spline is changing this. You can easily create objects and scenes, edit objects, colors, and properties, add physics and randomness, create animations and events, generate style alternatives, collaborate with others in real time, and much more. The applications for videos and product marketing are endless here; see a demo here. Sales. AI-powered sales tools are popping up every week, especially in the niche of email outreach. Personalization, subject line optimization, send-time optimization, sales rep coaching, auto-suggested follow-up cadences… just a few of the areas where AI can enhance your email campaigns. NLP can analyze email responses and suggest appropriate next steps, helping your sales team respond efficiently. There are tons of players in this space, like Cresta, Lyne, and Regie. Cresta was funded out of the AI Grant program organized by some of the brightest tech minds in AI. Their founder, S. Zayd Enam, chose to leave his PhD program at Stanford to establish the startup. They specifically provide AI guidance software and support that elevates each representative, behavioral coaching for agents to enhance performance, and ways to locate areas where monotonous tasks can be streamlined. There's also an AI sales email coach that assists you in real time, so you get more positive replies and write better emails faster: https://www.lavender.ai/ https://cresta.com/ https://lyne.ai/cold-email-ai/ https://www.regie.ai/ Web creators. Butternut AI https://butternut.ai/ The tagline says it all: create a website for your business in 20 seconds with AI. Butternut.ai uses generative AI technology that allows users to create websites by simply answering text prompts. The AI is designed to function like a personal developer, providing users with an efficient and cost-effective solution for creating and editing websites. Butternut.ai is continuously improving its website builder and is currently working on a v2 version that will offer more design and editing functionality. Users can even command the AI to perform specific tasks like creating a pricing page with a CTA to sign up. Similar: http://Stunning.so https://10web.io Ad and social creative. Albert.ai https://albert.ai/ Albert is an autonomous and self-learning digital marketing tool that uses ML + NLP as well as analytics to automate and optimize your paid ad campaigns, specifically Google's search and programmatic channels, as well as Facebook, Instagram, YouTube and Bing. It can automate bidding and budget allocation, keyword and audience targeting, as well as creative testing and optimization. The part that stands out here is that unlike many other campaign AI tools that just spit out recommendations and leave a marketer to take the action, Albert claims to be one of the first tools that's an autonomous AI: it takes the action as well. Not only that, it's also making adjustments and improvements constantly. You seem to be able to set guardrails, of course. They also claim the ability to identify trends, uncover new audience segments, and optimize ad placements. In their docs they say they are most useful with large data sets and in B2C environments.
AdCreative https://adcreative.ai/ AutoGPT and AI agents. I don't think we can do a full episode about AI tools without talking about AutoGPT and AI agents. Essentially, you assign an objective to an AI agent and it works on tasks that lead to accomplishing that objective. It makes prompting a bit easier: instead of giving full instructions, the AI identifies the necessary steps to achieve your goal, and some of the more advanced ones generate additional AI agents to assist. You may have seen this on Twitter; I think the first true demo was Yohei Nakajima's impressive demonstration of babyAGI. Things started blowing up with AutoGPT, released by SigGravitas just last week. It's described as a self-running AI agent that can write its own code, heal itself from errors, and access the internet via Google search. It does sound really cool for several reasons: internet access, long-term and short-term memory management, text generation, and integration with ElevenLabs for AI text-to-speech generation. It's arguable that all of this will be possible with GPT plugins, but this is doable today and has different applications. Let's unpack a few examples. Social media is primed for automation; lots of the work here can already be automated. But imagine setting an AI agent loose with the objective of creating content, scheduling posts (not just posting them) and replying to comments and questions. Replying to questions is a huge part of customer service; you could argue this is another area that's already being automated with chat bots, especially the better ones connected to help docs. But imagine setting an AI agent loose with the objective of really understanding customer questions (not just picking up a keyword and recommending a help article); it could speak any language and you could train it on your brand tone. We're still early, but there's already been a ton of innovation here. Here are some of the tools you can play around with today without reading a GitHub repo: Cognosys https://www.cognosys.ai/ Cognosys allows you to operate your personal AutoGPT agent in-browser, and it's currently available and free to use. I've personally played around with it quite a bit and it's pretty wild; it's not perfect yet, but it's really cool seeing how it will add new tasks to get to the objective. The beta is still on GPT-3.5. Here's my favorite example. The main objective was: Build a list of AI tools that have a marketing application, that were released in the last 3 months. Here were the follow-up tasks it added to get to the objective: 1. Retrieve a list of AI tools that were released in the last 3 months. 2. Filter the list to only include tools that have a marketing application. 3. Compile the filtered list into a final list of AI tools with marketing applications released in the last 3 months. The final list of AI tools with marketing applications released in the last 3 months is as follows: 1. Receptiviti: a tool that uses AI to analyze consumer emotions and sentiment in order to optimize marketing strategies. 2. Cortex: a platform that uses AI to help marketers personalize content and improve customer engagement. 3. Adext AI: a tool that uses AI to optimize ad campaigns and improve ad performance. 4. Acoustic: an AI-powered marketing platform that helps businesses automate and optimize their marketing strategies. 5. Pathmatics: a tool that uses AI to provide insights into ad spending and campaign performance for competitive analysis. It then added a few extra tasks because it didn't seem satisfied with the list.
Research and compile a list of AI tools that have a marketing application and were released in the last 6 months. The result was 6 tools, but not all 5 from the first task were there. Identify the unique features and functionalities of each AI tool on the list and create a comparison chart to evaluate which tools are most suitable for specific marketing applications. Conduct a survey of marketing professionals to gather feedback and opinions on the identified AI tools and their effectiveness in improving marketing strategies. But then it apologized and said it cannot perform that task… Research and compile a list of case studies showcasing the successful implementation of AI tools in marketing campaigns. It spun up a case study about Coke using "Albert" to optimize their social media advertising campaigns and Amazon using "Rekognition" to analyze customer behavior and personalize their shopping experience. Create a step-by-step guide on how to integrate an AI tool into a marketing strategy, including best practices and potential challenges. The step-by-step wasn't too bad: 1: Define the problem or opportunity, 2: Identify the right AI tool by researching and comparing different tools, 3: Understand the data requirements like type of data, format, and update frequency, 4: Prepare the data for integration by cleaning and formatting it, 5: Integrate the AI tool. Research and compile a list of AI tools that have a marketing application and were released in the last 2 months. Not sure why it did this for 2 months, but this time it gave me tools with release dates, which was pretty cool, but it was also obvious that I wasn't getting tools released in the last 2 months; I was only getting tools released in the 2 months before GPT's latest batch of training data, which was mid-2021. Create a comparison chart to evaluate the unique features and functionalities of each AI tool on the list and determine which tools are most suitable for specific marketing applications. Also try: https://agentgpt.reworkd.ai/ https://app.hyperwriteai.com/earlyAccess https://godmode.space/ Other. Rewind.ai https://twitter.com/dsiroker/status/1646895452317700097?s=20 https://www.rewind.ai/ You may have seen their public investor pitch on Twitter. The founder is the co-founder of Optimizely, and his team built a way for you to record and store everything you've said or heard or seen, and they make it searchable. Obviously there are huge privacy considerations with something like this. But people don't seem to care haha; they went from 0 to $700k in ARR in 3 months. Perplexity AI https://www.perplexity.ai/ Perplexity is a startup that's changing the way we search the web. With their conversational search engine, you can ask questions in plain English and get accurate answers from various sources. And with $26 million in Series A funding (including investments from Elad Gil), they're looking to revolutionize the search engine game. Their AI technology sets it apart from traditional search engines like Google, and their chatbot-like interface is user-friendly and intuitive; it is built on top of GPT. Perplexity's focus on accuracy is a breath of fresh air in an era where search engines can be manipulated by advertisers and search engine optimization. The Series A funds will be used to optimize their knowledge database and expand their reach.
All in all, Perplexity AI is definitely one to watch in the coming years! Character AI https://beta.character.ai/ Imagine being able to have a one-on-one conversation with your favorite celebrity or fictional character - well, that's now a possibility with Character.ai, an innovative AI website created by two former Google engineers. The platform has been growing in popularity since its launch last September, offering users the opportunity to chat with a wide range of characters for free. What's more, the interactions are so seamless that some users have reported spending hours chatting with their chosen personality, almost forgetting that they're talking to a machine. However, there's a catch - the interactions are not real, and the AI's responses may not always be accurate or reliable. Despite this, Character.ai has been embraced by fans who are looking for new ways to engage with their favorite personalities, particularly online. This is especially true for fan fiction lovers who are excited about the tool's potential for creating new experiences and making the barrier to entry for writing fan fiction much lower. But as with any new technology, there are concerns about its impact on real-world relationships, especially if users spend more time on it than with the people they care about. Furthermore, the AI technology could be used by stans to go after a perceived enemy of their favorite star, which could be a problem if it leads to harmful interactions or behavior towards other users. Despite these concerns, Character.ai represents a new frontier for fan culture, offering a new way for fans to engage with their favorite personalities and characters. The platform's growth in popularity is a testament to the human need for connection, and while it may not substitute for actual interactions, it provides a unique and exciting way for fans to engage with their favorite personalities. Magic https://magic.dev/ Magic, a startup developing an AI-driven tool to help software engineers write, review, debug, and plan code changes, just raised $23 million in a funding round led by Alphabet's CapitalG. The platform's CEO, Eric Steinberger, was inspired by AI at a young age and is using his experience as an AI researcher to create a tool that will communicate with users in natural language, helping to improve the speed and cost of developing software. Steinberger claims that Magic can understand legacy code and collaborate with users on code changes, operating like a pair programmer that learns more about the context of both coding projects and developers. The AI colleague that Magic offers will be able to understand code and help developers navigate it, allowing companies to scale the impact of their current employees and train new employees with less personal coaching. What sets it apart is that it allows developers to describe what they want in English, and the AI will understand it and collaborate with them on the changes. This means that developers can work with Magic like a colleague and send messages telling it what changes they want to be made to the code. This one isn't built on GPT; the company has developed its own neural network architecture that can rapidly read code and is capable of detecting, warning about and overcoming potential bugs in the code.
Honorable mentions: Galileo AI - create editable UI designs from a text description https://usegalileo.ai/ Notocat - write your newsletters in Notion and send them to your subscribers https://twitter.com/razvanilin/status/1643565644061999104 Brain FM - music made by AI that's scientifically proven to increase focus https://www.brain.fm/ Meeting notes and transcription apps https://tldv.io/ https://vowel.com/ Image Vectorizer - turn small images into vectors https://vectorizer.io/ Speech-to-text generators for podcasts that create notes, timestamps and summary content https://castmagic.io/ https://podsqueeze.com/ Text-to-speech AI voice generators https://murf.ai/ https://beta.elevenlabs.io/ Text-to-music AI music generator https://mubert.com/ Text-to-SQL queries: connect your database, ask a question, get an answer https://usechannel.com/ Teachable's AI course curriculum generator https://teachable.com/ai-curriculum-generator https://twitter.com/teachable/status/1641154548419969048?s=20 The opportunities with AI tools are endless, and the applications go far beyond marketing. I see too many people on Twitter dunking on using it to help you generate text or as a search engine or fact checker… and they're missing the big picture. People are using AI to build custom meal plans, exercise plans and sleeping plans; build routines and schedules with their newborns; plan road trips, date ideas, parties and events; act as a therapist; get movie and book recommendations; design personal budgets; assist with resume and cover letter writing; summarize long articles and YouTube videos; write SQL queries; and explain Python and CSS code in plain English. Conclusion. It's not like our marketing jobs are gonna vanish overnight, but the shift is happening faster than many of us realize. AI's no longer just a loosely backed buzzword; it's doing things today that we used to think were impossible. So, as marketers, we've gotta take this tech seriously. There are 4 main things marketers should be doing to future-proof their careers: Instead of asking if AI's gonna replace our roles in marketing, we should be talking about how quickly it could happen and what it'll look like if it does. Staying informed and learning from some of the best podcasts and newsletters about AI; Ben's Bites, the No Priors podcast, A Guide for Thinking Humans and the AI Exchange are great resources. Now is the time to figure out if you need to make changes to your current area of speciality; ask yourself if you should double down on additional areas like data and API services, getting closer to product and customers, or starting to learn about ethics and data privacy. Stay curious, get your hands dirty and experiment fearlessly: fill out some forms, spin up free trials, get on wait lists, and give new AI tools a chance. It's only by actually getting your hands dirty that you'll discover which tools truly work for you and which are just part of the ever-growing sea of gimmicky AI tools. ✌️ -- Intro music by Wowa via Unminus. Cover art created with Midjourney.
The possibilities for AI applications can be hugely inspiring and seem practically boundless, but as an organization, it's critical that you define your strategy before you begin to implement these technologies. In today's episode, we attend the National Association of Broadcasters (NAB) Show of 2023, where we catch up with three remarkable companies that are leveraging the power of AI to revolutionize their operations. Tuning in, you'll learn about the exciting developments at Dalet, Stats Perform, and Amazon AWS, along with the innovative products they're showcasing at the NAB Show of 2023. During our interviews, we unpack how they are leveraging AI to enhance data and analytics capabilities, transform workflows for their customers, and more. We also meet up with Veritone's SVP and General Manager of Commercial Enterprise, Sean King, to discuss his thoughts on the exciting prospects of generative AI, its applications in business, and his advice for companies looking to get into this field. The NAB Show of 2023 was a thrilling opportunity to gain exposure to all of the up-and-coming developments in the AI space and chat with highly knowledgeable individuals about their current projects. Tune in to learn more about the exciting developments taking place and what you can expect from the not-too-distant future! Veritone disclaims any responsibility for any statement of participants in the podcast. The views expressed in this podcast are those of the interviewee and do not necessarily reflect the views of Veritone or its directors, officers, or employees. Key Points From This Episode: An overview of today's guests at the National Association of Broadcasters Show 2023. Meet Aaron Kroger, Product Marketing Manager for Media Workflow at Dalet. What Dalet is showcasing at the NAB show and their partnership with Veritone and Flex. How Dalet is using AI and machine learning to enhance data and analytics capabilities. Dalet's vision for the future of the industry and how they see their technology and services evolving in the coming years. Introducing Ronen Nandi, Director of Business Development at Stats Perform. The latest developments at Stats Perform and a breakdown of their Opta Vision project. Hear the latest from Brian Kenworthy, Principal Solutions Architect at Amazon AWS. Learn about Amazon SageMaker and the services they offer. Brian's experience using Veritone Voice and how it impressed him. The term generative AI and a closer look at what it entails. How AI is expected to disrupt the job market and increase innovation. Sean's parting advice to businesses looking to explore the world of generative AI. Links Mentioned in Today's Episode: Sean King on LinkedIn, Brian Kenworthy on LinkedIn, Aaron Kroger on LinkedIn, Ronen Nandi on LinkedIn, Veritone, Veritone Voice, OpenAI, ChatGPT, Cameo Kids, Magen Mintchev on LinkedIn, Adventures in AI Podcast. Follow Veritone on Facebook, Instagram, LinkedIn, and Twitter!
Machine Learning is critical for software teams. Adam Starr, Senior Global Engineering Manager, Data Science at Proofpoint, shares their journey to successfully building and deploying AI/ML solutions using Amazon SageMaker. Topics include: Adam Starr, Senior Global Engineering Manager, Data Science at Proofpoint Inc; how Proofpoint leverages AI in its security models; how ML Ops leverages AWS SageMaker; AWS innovation – going from idea to prototype to production (SageMaker Studio, Ground Truth, GitPush, SageMaker Pipelines, SageMaker Endpoints); critical success factors and which AWS services to use; and partnering with AWS on a Machine Learning operational platform.
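As a rough illustration of the "production" end of that idea-to-prototype-to-production path, here is a minimal, hypothetical sketch of putting an already-trained model behind a real-time SageMaker endpoint with the SageMaker Python SDK; the role ARN, S3 path, script name, and endpoint name are placeholders, and this is not Proofpoint's actual setup.

```python
from sagemaker.sklearn.model import SKLearnModel

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

# Wrap an already-trained model artifact (model.tar.gz) with an inference script.
model = SKLearnModel(
    model_data="s3://my-bucket/models/model.tar.gz",  # placeholder artifact
    role=role,
    entry_point="inference.py",        # your model_fn / predict_fn handler
    framework_version="1.2-1",
)

# Stand up a managed HTTPS endpoint behind the model.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="demo-sklearn-endpoint",  # placeholder name
)

print(predictor.predict([[0.1, 0.2, 0.3]]))  # payload shape depends on your model
```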
Links: Amazon CloudFront now supports the removal of response headers Amazon SageMaker is now available in AWS Middle East (UAE) Region Amazon Neptune announces graph-explorer, an open-source visual exploration tool for low-code users An elastic deployment of Stable Diffusion with Discord on AWS Measure the Business Impact of Personalize Recommendations How Heineken's Connected Brewery Ecosystem fuels automation
James is back from his Vegas trip for AWS re:Invent! In this episode, they discuss James' impression of the city, the hot topics of the talks and James' exploration of the variety of American fast food joints. 00 - What Vegas is like 01 - Elastic vs Serverless 02 - Event overview 06 - Redshift Serverless 07 - Zero ETL 08 - Aurora into Redshift 09 - DataZone 10 - QuickSight Q 11 - Keynote: Swami Sivasubramanian Fully managed databases AWS Glue Data Quality The new geospatial capabilities in Amazon SageMaker allow data scientists to predict natural disasters, manage relief efforts, and navigate roads using satellite imagery and pre-trained neural networks. DataZone, governance for data lakes, data warehouses, and machine learning 15 - Keynote: Dr. Werner Vogels Parallelism and concurrency Event-driven architectures AWS Application Composer Resources: * https://reinvent.awsevents.com/ * Keynotes (https://reinvent.awsevents.com/keynotes/) Find out more about Stac and Parallax: * Stac (https://stac.works) * Parallax (https://parall.ax)
During AWS re:Invent 2022, AWS held no fewer than four two-hour keynotes announcing new products and product updates. The number of updates is overwhelming and shows why AWS is the global market leader. In this episode of Techzine Talks we try to bring some perspective to the many updates. Notably, the most important announcement of the event was tucked away at the very end of the keynote by Adam Selipsky, AWS's CEO: alongside Amazon Connect, AWS now has its second true business application, AWS Supply Chain. The muscle-flexing and show of force was most pronounced around compute and data. AWS dominates the market in its choice of instances and chips, and in the data space it has an extraordinary range of databases and analytics tools on offer. If we zoom in even further, mainly on machine learning, AWS holds a unique position: no one has made machine learning more accessible than AWS to date. More on all of this in this episode of Techzine Talks.
Data Futurology - Data Science, Machine Learning and Artificial Intelligence From Industry Leaders
In today's episode, we have Romina Sharifpour, Machine Learning Specialist at Amazon Web Services (AWS). Operationalising machine learning models, particularly scaling MLOps capability across teams within an organisation, is a difficult feat. Join Romina to find out how you can easily accelerate your MLOps journey using Amazon SageMaker Pipelines. You'll gain insights into how AWS customer Carsales keeps up with increased demand in building and productionising AI models, and their strategy to democratise AI across their development teams, which allows any developer to be a citizen data scientist and ML engineer by leveraging Amazon SageMaker. Enjoy the show! If you want to learn more about building modern applications on AWS and attend a virtual conference, just google “AWS Innovate” or click the link below. https://aws.amazon.com/events/aws-innovate/apj/modern-apps/ Thank you to our sponsor, Talent Insights Group! Join us in Melbourne for Scaling AI with MLOps: https://www.datafuturology.com/mlops Join our Slack Community: https://join.slack.com/t/datafuturologycircle/shared_invite/zt-z19cq4eq-ET6O49o2uySgvQWjM6a5ng Read the full podcast episode summary here. --- Send in a voice message: https://anchor.fm/datafuturology/message
What does MLOps mean? What are the similarities and differences with DevOps? Which tools for automating machine learning workflows are available on AWS and within Amazon SageMaker? In this episode I host Paolo di Francesco, Sr. Solutions Architect at AWS, to talk about services and solutions such as SageMaker Studio, SageMaker Pipelines, SageMaker Projects, SageMaker Monitoring, and much more. Link: MLOps on AWS. Link: SageMaker Studio. Link: SageMaker Pipelines. Link: SageMaker Projects.
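For readers who have not seen SageMaker Pipelines before, here is a minimal, hypothetical sketch of what a pipeline definition looks like in the SageMaker Python SDK. The training script, S3 path, and IAM role are placeholders rather than anything discussed in these episodes, and exact class signatures vary a little between SDK versions.

```python
# Minimal, hypothetical sketch of a SageMaker Pipeline with a single training step.
# The script name, S3 path, and IAM role are placeholders, not taken from the episodes.
import sagemaker
from sagemaker.inputs import TrainingInput
from sagemaker.sklearn.estimator import SKLearn
from sagemaker.workflow.parameters import ParameterString
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

# A pipeline parameter lets the same definition run against different datasets.
train_data = ParameterString(name="TrainData", default_value="s3://my-bucket/train/")

estimator = SKLearn(
    entry_point="train.py",        # placeholder training script
    framework_version="1.0-1",
    instance_type="ml.m5.large",
    role=role,
    sagemaker_session=session,
)

train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput(s3_data=train_data)},
)

pipeline = Pipeline(
    name="demo-mlops-pipeline",
    parameters=[train_data],
    steps=[train_step],
    sagemaker_session=session,
)
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
execution = pipeline.start()    # start a run; progress is visible in SageMaker Studio
```

Real pipelines typically add processing, evaluation, and model-registration steps, but the structure stays the same: steps wired into a Pipeline object that is versioned and executed by the service.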
Want to give your ears a break and read this as an article? You're looking for this link: https://www.lastweekinaws.com/blog/sagemaker_is_responsible_for_my_surprise_bill/ Want to watch the full dramatic reenactment of this podcast? Watch the YouTube video here: https://youtu.be/LCZjSZhRAjs Never miss an episode: join the Last Week in AWS newsletter, or subscribe wherever you get your podcasts. Help the show: leave a review, share your feedback, subscribe wherever you get your podcasts, or buy our merch at https://store.lastweekinaws.com What's Corey up to? Follow Corey on Twitter (@quinnypig), see our recent work at the Duckbill Group, and apply to work with Corey and the Duckbill Group to help lower your AWS bill.
In part two, Dave chats again with Francesco Pochetti, Senior Machine Learning Engineer at Bolt, and an AWS Machine Learning Hero. In this episode, Francesco dives deep into the ML tools on AWS, starting with tools such as NVIDIA Triton and TensorRT, and how to improve processing time for Computer Vision. He also covers Amazon SageMaker and many other AWS ML services, as well as deploying ML models using Docker in the best way possible. If you missed it, you can listen to part one of this conversation in Episode 045. Francesco on Twitter: https://twitter.com/Fra_Pochetti Dave on Twitter: https://twitter.com/thedavedev Francesco's Website: https://francescopochetti.com/ Francesco's LinkedIn: https://www.linkedin.com/in/francescopochetti/ Francesco's GitHub: https://github.com/FraPochetti [BLOG] Blurry faces: Training, Optimizing and Deploying a segmentation model on Amazon SageMaker with NVIDIA TensorRT and NVIDIA Triton - https://francescopochetti.com/blurry-faces-a-journey-from-training-a-segmentation-model-to-deploying-tensorrt-to-nvidia-triton-on-amazon-sagemaker/ [BLOG] Machine Learning and Developing inside a Docker Container in Visual Studio Code - https://francescopochetti.com/developing-inside-a-docker-container-in-visual-studio-code/ [BLOG] Deploying a Fashion-MNIST web app with Flask and Docker: https://francescopochetti.com/deploying-a-fashion-mnist-web-app-with-flask-and-docker/ [BLOG] IceVision meets AWS: detect LaTeX symbols in handwritten math and deploy with Docker on Lambda: https://francescopochetti.com/icevision-meets-aws-detect-latex-symbols-in-handwritten-math-and-deploy-with-docker-on-lambda/ [DOCS] Amazon Rekognition - https://aws.amazon.com/rekognition/ [DOCS] Amazon SageMaker - https://aws.amazon.com/sagemaker/ [DOCS] Amazon Textract - https://aws.amazon.com/textract/ [DOCS] Deploy fast and scalable AI with NVIDIA Triton Inference Server in Amazon SageMaker - https://aws.amazon.com/blogs/machine-learning/deploy-fast-and-scalable-ai-with-nvidia-triton-inference-server-in-amazon-sagemaker/ [GIT] Nvidia Triton Inference Server: https://github.com/triton-inference-server/server/ [GIT] Blurry faces: Training, Optimizing and Deploying a segmentation model on Amazon SageMaker with NVIDIA TensorRT and NVIDIA Triton - https://github.com/FraPochetti/KagglePlaygrounds/tree/master/triton_nvidia_blurry_faces Subscribe: Amazon Music: https://music.amazon.com/podcasts/f8bf7630-2521-4b40-be90-c46a9222c159/aws-developers-podcast Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-developers-podcast/id1574162669 Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zb3VuZGNsb3VkLmNvbS91c2Vycy9zb3VuZGNsb3VkOnVzZXJzOjk5NDM2MzU0OS9zb3VuZHMucnNz Spotify: https://open.spotify.com/show/7rQjgnBvuyr18K03tnEHBI TuneIn: https://tunein.com/podcasts/Technology-Podcasts/AWS-Developers-Podcast-p1461814/ RSS Feed: https://feeds.soundcloud
After a recent episode of the podcast, listeners suggested the topic of ML, specifically explaining what MLOps is and how to choose the right EC2 instance for ML tasks. Dima Laptev (Solutions Architect) joined the discussion, and we covered in detail: What is MLOps? Which phases make up MLOps, and which steps are included in CI/CD. How it relates to DevOps engineers, and how deeply you need to know ML to set up MLOps on a project. How to implement MLOps on AWS. Once MLOps is set up and you are ready for production, which machines are best suited for ML workloads, and how to choose the right instance for them. Useful links: Build MLOps workflows with Amazon SageMaker projects, GitLab, and GitLab pipelines; MLOps Workload Orchestrator Implementations; Amazon EC2 Instance Types; Amazon SageMaker Pricing - Machine Learning. If you have questions or topic suggestions, message me on LinkedIn or Telegram: https://t.me/VictorVedmich
In this episode, Dave chats with Francesco Pochetti, Senior Machine Learning Engineer at Bolt, and an AWS Machine Learning Hero. Francesco covers his career start as a chemist, his journey into a career in Data Science, and how Computer Vision technology is handling some of the most difficult Machine Learning problems today. Francesco on Twitter: https://twitter.com/Fra_Pochetti Dave on Twitter: https://twitter.com/thedavedev Francesco's Website: https://francescopochetti.com/ Francesco's LinkedIn: https://www.linkedin.com/in/francescopochetti/ Francesco's GitHub: https://github.com/FraPochetti [BLOG] Blurry faces: Training, Optimizing and Deploying a segmentation model on Amazon SageMaker with NVIDIA TensorRT and NVIDIA Triton: https://francescopochetti.com/blurry-faces-a-journey-from-training-a-segmentation-model-to-deploying-tensorrt-to-nvidia-triton-on-amazon-sagemaker/ [BLOG] Machine Learning and Developing inside a Docker Container in Visual Studio Code: https://francescopochetti.com/developing-inside-a-docker-container-in-visual-studio-code/ [BLOG] Deploying a Fashion-MNIST web app with Flask and Docker: https://francescopochetti.com/deploying-a-fashion-mnist-web-app-with-flask-and-docker/ [BLOG] IceVision meets AWS: detect LaTeX symbols in handwritten math and deploy with Docker on Lambda: https://francescopochetti.com/icevision-meets-aws-detect-latex-symbols-in-handwritten-math-and-deploy-with-docker-on-lambda/ [DOCS] Amazon Rekognition: https://aws.amazon.com/rekognition/ [DOCS] Amazon SageMaker: https://aws.amazon.com/sagemaker/ [DOCS] Amazon Textract: https://aws.amazon.com/textract/ [DOCS] Deploy fast and scalable AI with NVIDIA Triton Inference Server in Amazon SageMaker: https://aws.amazon.com/blogs/machine-learning/deploy-fast-and-scalable-ai-with-nvidia-triton-inference-server-in-amazon-sagemaker/ [GIT] Nvidia Triton Inference Server: https://github.com/triton-inference-server/server/ [GIT] Blurry faces: Training, Optimizing and Deploying a segmentation model on Amazon SageMaker with NVIDIA TensorRT and NVIDIA Triton: https://github.com/FraPochetti/KagglePlaygrounds/tree/master/triton_nvidia_blurry_faces Subscribe: Amazon Music: https://music.amazon.com/podcasts/f8bf7630-2521-4b40-be90-c46a9222c159/aws-developers-podcast Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-developers-podcast/id1574162669 Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zb3VuZGNsb3VkLmNvbS91c2Vycy9zb3VuZGNsb3VkOnVzZXJzOjk5NDM2MzU0OS9zb3VuZHMucnNz Spotify: https://open.spotify.com/show/7rQjgnBvuyr18K03tnEHBI TuneIn: https://tunein.com/podcasts/Technology-Podcasts/AWS-Developers-Podcast-p1461814/ RSS Feed: https://feeds.soundcloud
In this episode, Dave continues his chat with Luca Bianchi, Chief Technology Officer at Neosperience and WizKey. Luca covers his career in Machine Learning, an overview of Machine Learning on AWS, Amazon Rekognition, Amazon Comprehend, Amazon Sagemaker, and AWS Glue. Luca also discusses current 2022 ML trends including Transformers, and Edge Computing. Luca on Twitter: twitter.com/bianchiluca Dave on Twitter: twitter.com/thedavedev Luca on LinkedIn: www.linkedin.com/in/lucabianchipavia/ Luca on Medium: medium.com/@aletheia Amazon Rekognition: aws.amazon.com/rekognition/ Amazon Comprehend: aws.amazon.com/comprehend/ Amazon Sagemaker: aws.amazon.com/sagemaker/ AWS Glue: aws.amazon.com/glue Towards Data Science: towardsdatascience.com/ Clean or Dirty HVAC? Using Amazon SageMaker and Amazon Rekognition Custom Labels to Automate Detection: bit.ly/3rvpNDJ Building a Neural Network on Amazon SageMaker with PyTorch Lightning: bit.ly/3xAmxLc ---------------------------------------------- Subscribe: Amazon Music: https://music.amazon.com/podcasts/f8bf7630-2521-4b40-be90-c46a9222c159/aws-developers-podcast Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-developers-podcast/id1574162669 Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zb3VuZGNsb3VkLmNvbS91c2Vycy9zb3VuZGNsb3VkOnVzZXJzOjk5NDM2MzU0OS9zb3VuZHMucnNz Spotify: https://open.spotify.com/show/7rQjgnBvuyr18K03tnEHBI TuneIn: https://tunein.com/podcasts/Technology-Podcasts/AWS-Developers-Podcast-p1461814/ RSS Feed: https://feeds.soundcloud.com/users/soundcloud:users:994363549/sounds.rss
In this episode, Dave chats with Luca Bianchi, Chief Technology Officer at Neosperience and WizKey. Luca covers his career in Machine Learning, an overview of Machine Learning on AWS, Amazon Rekognition, Amazon Comprehend, Amazon Sagemaker, and AWS Glue. Luca also discusses Edge ML and what can be achieved using commodity hardware. Luca on Twitter: https://twitter.com/bianchiluca Dave on Twitter: https://twitter.com/thedavedev Luca on LinkedIn: https://www.linkedin.com/in/lucabianchipavia/ Luca on Medium: https://medium.com/@aletheia Amazon Rekognition: https://aws.amazon.com/rekognition/ Amazon Comprehend: https://aws.amazon.com/comprehend/ Amazon Sagemaker: https://aws.amazon.com/sagemaker/ AWS Glue: https://aws.amazon.com/glue Towards Data Science: https://towardsdatascience.com/ Clean or Dirty HVAC? Using Amazon SageMaker and Amazon Rekognition Custom Labels to Automate Detection: https://bit.ly/3rvpNDJ Building a Neural Network on Amazon SageMaker with PyTorch Lightning: https://bit.ly/3xAmxLc Subscribe: Amazon Music: https://music.amazon.com/podcasts/f8bf7630-2521-4b40-be90-c46a9222c159/aws-developers-podcast Apple Podcasts: https://podcasts.apple.com/us/podcast/aws-developers-podcast/id1574162669 Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zb3VuZGNsb3VkLmNvbS91c2Vycy9zb3VuZGNsb3VkOnVzZXJzOjk5NDM2MzU0OS9zb3VuZHMucnNz Spotify: https://open.spotify.com/show/7rQjgnBvuyr18K03tnEHBI TuneIn: https://tunein.com/podcasts/Technology-Podcasts/AWS-Developers-Podcast-p1461814/ RSS Feed: https://feeds.soundcloud.com/users/soundcloud:users:994363549/sounds.rss
Voices' CEO, David Ciccarelli, joins Coruzant Technologies for the Digital Executive podcast. He shares how his online marketplace delivers voice talent to business clients, making better, faster matches by leveraging the power of machine learning with tools like Amazon SageMaker.
Edo Liberty: How Vector Data Is Changing The Way We Recommend Everything [Audio] Edo Liberty is the CEO of Pinecone, a company hiring exceptional scientists and engineers to solve some of the hardest and most impactful machine learning challenges of our times. Edo also worked at Amazon Web Services, where he managed the algorithms group at Amazon AI. As Senior Manager of Research, Amazon SageMaker, Edo and his team built scalable machine learning systems and algorithms used both internally and externally by customers of SageMaker, AWS's flagship machine learning platform. Edo served as Senior Research Director at Yahoo, where he was the head of Yahoo's independent research in New York, focusing on scalable machine learning and data mining for Yahoo's critical applications. Edo was a postdoctoral research fellow in Applied Mathematics at Yale University. His research focused on randomized algorithms for data mining, in particular dimensionality reduction, numerical linear algebra, and clustering. He is also interested in the concentration of measure phenomenon. Please support this podcast by checking out our sponsors. Episode Links: Edo Liberty LinkedIn: https://www.linkedin.com/in/edo-liberty-4380164/ Edo Liberty Twitter: https://twitter.com/pinecone Edo Liberty Website: https://www.pinecone.io Podcast Details: Podcast website: https://www.humainpodcast.com Apple Podcasts: https://podcasts.apple.com/us/podcast/humain-podcast-artificial-intelligence-data-science/id1452117009 Spotify: https://open.spotify.com/show/6tXysq5TzHXvttWtJhmRpS RSS: https://feeds.redcircle.com/99113f24-2bd1-4332-8cd0-32e0556c8bc9 YouTube Full Episodes: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag YouTube Clips: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag/videos Support and Social Media: – Check out the sponsors above, it's the best way to support this podcast – Support on Patreon: https://www.patreon.com/humain/creators – Twitter: https://twitter.com/dyakobovitch – Instagram: https://www.instagram.com/humainpodcast/ – LinkedIn: https://www.linkedin.com/in/davidyakobovitch/ – Facebook: https://www.facebook.com/HumainPodcast/ – HumAIn Website Articles: https://www.humainpodcast.com/blog/ Outline: Here are the timestamps for the episode: (06:02)- It's funny how being a scientist and building applications and building platforms are so different. It's kind of like for me it's just by analogy, I mean, kind of a scientist, if you're looking at some achievement, like technical achievement as being a top of a mountain and a scientist is trying to like hike, they're trying to be the first person to the summit. (06:28)- When you build an application, you kind of have to build a road, you have to be able to drive them with a car. And when you're building a platform on AWS or at Pinecone, you have to like build a city there. You have to really like, completely like to cover it. For me, the experience of building platforms and AWS was transformational because the way we think about problems is completely different. It's not about proving that something is possible, it is building the mechanisms that make it possible always for, in any circumstance. (13:43)- And so on and today with machine learning, you don't really have to do any of that. 
You have pre-trained NLP models that convert a string, like a, take a sentence in English to an embedding, to a high dimensional vector, such that the similarity or either the distance or the angle between them is analogous to the similarity between them in terms of conceptual, semantic similarity. (18:17)- Almost always Pinecone ends up being a lot easier, a lot faster and a lot more production ready than what they would build in house. A lot more functional. We've spent two and a half years now baking a lot of really great features into Pinecone. And we've just launched a version 2.0 that contains all sorts of filtering capabilities and cost reduction measures and you name it. (21:22)- And so I'm a great believer in knowing your own data and knowing your own customers and training your own models. It doesn't mean that you have to train them from scratch. It doesn't mean you don't have to use the right tools. You don't have to reinvent the wheel, but I'm not a big believer in completely pre-trained, plucked off of a random place in the internet models. I do want to say that there are great models for just feature engineering for objects that don't change so much. So we have language models like BERT that transform text and create great embeddings and they're a good starting point. (31:01)- So I think you'll see two things. First of all, with Pinecone specifically, we're focused on really only two things: making it easy to use and get value out of Pinecone and making it cheaper. That's it! I mean that, those are the only two things we care about. Like if you can get a ton of value out of it and it doesn't cost you too much, that's it, you're a happy customer and we're happy to get you there. So that pretty much sums up all of our focus. Advertising Inquiries: https://redcircle.com/brands Privacy & Opt-Out: https://redcircle.com/privacy
Pull your podcast player out of instant retrieval, because we're discussing re:Invent 2021 as well as the weeks before it. Lots of announcements; big, small, weird, awesome, and anything in between. We had fun with this episode and hope you do too. Find us at melb.awsug.org.au or as @AWSMelb on Twitter. News Finally in Sydney AWS Snowcone SSD is now available in the US East (Ohio), US West (San Francisco), Asia Pacific (Singapore), Asia Pacific (Sydney) and AWS Asia Pacific (Tokyo) regions Amazon EC2 M6i instances are now available in 5 additional regions Serverless Introducing Amazon EMR Serverless in preview Announcing Amazon Kinesis Data Streams On-Demand Announcing Amazon Redshift Serverless (Preview) Introducing Amazon MSK Serverless in public preview Introducing Amazon SageMaker Serverless Inference (preview) Simplify CI/CD Configuration for AWS Serverless Applications and your favorite CI/CD system – General Availability Amazon AppStream 2.0 launches Elastic fleets, a serverless fleet type AWS Chatbot now supports management of AWS resources in Slack (Preview) Lambda AWS Lambda now supports partial batch response for SQS as an event source AWS Lambda now supports cross-account container image pulling from Amazon Elastic Container Registry AWS Lambda now supports mTLS Authentication for Amazon MSK as an event source AWS Lambda now logs Hyperplane Elastic Network Interface (ENI) ID in AWS CloudTrail data events Step Functions AWS Step Functions Synchronous Express Workflows now supports AWS PrivateLink Amplify Introducing AWS Amplify Studio AWS Amplify announces the ability to override Amplify-generated resources using CDK AWS Amplify announces the ability to add custom AWS resources to Amplify-created backends using CDK and CloudFormation AWS Amplify UI launches new Authenticator component for React, Angular, and Vue AWS Amplify announces the ability to export Amplify backends as CDK stacks to integrate into CDK-based pipelines AWS Amplify expands its Notifications category to include in-app messaging (Developer Preview) AWS Amplify announces a redesigned, more extensible GraphQL Transformer for creating app backends quickly Containers Fargate Announcing AWS Fargate for Amazon ECS Powered by AWS Graviton2 Processors ECS Amazon ECS now adds container instance health information Amazon ECS has improved Capacity Providers to deliver faster Cluster Auto Scaling Amazon ECS-optimized AMI is now available as an open-source project Amazon ECS announces a new integration with AWS Distro for OpenTelemetry EKS Amazon EKS on AWS Fargate now Supports the Fluent Bit Kubernetes Filter Amazon EKS adds support for additional cluster configuration options using AWS CloudFormation Visualize all your Kubernetes clusters in one place with Amazon EKS Connector, now generally available AWS Karpenter v0.5 Now Generally Available AWS customers can now find, subscribe to, and deploy third-party applications that run in any Kubernetes environment from AWS Marketplace Other Amazon ECR announces pull through cache repositories AWS App Mesh now supports ARM64-based Envoy Images EC2 & VPC Instances New – EC2 Instances (G5) with NVIDIA A10G Tensor Core GPUs | AWS News Blog Announcing new Amazon EC2 G5g instances powered by AWS Graviton2 processors Introducing Amazon EC2 R6i instances Introducing two new Amazon EC2 bare metal instances Amazon EC2 Mac Instances now support hot attach and detach of EBS volumes Amazon EC2 Mac Instances now support macOS Monterey Announcing Amazon EC2 M1 Mac instances for macOS 
Announcing preview of Amazon Linux 2022 Elastic Beanstalk supports AWS Graviton-based Amazon EC2 instance types Announcing preview of Amazon EC2 Trn1 instances Announcing new Amazon EC2 C7g instances powered by AWS Graviton3 processors Announcing new Amazon EC2 Im4gn and Is4gen instances powered by AWS Graviton2 processors Introducing the AWS Graviton Ready Program Introducing Amazon EC2 M6a instances AWS Compute Optimizer now offers enhanced infrastructure metrics, a new feature for EC2 recommendations AWS Compute Optimizer now offers resource efficiency metrics Networking AWS price reduction for data transfers out to the internet Amazon Virtual Private Cloud (VPC) customers can now create IPv6-only subnets and EC2 instances Application Load Balancer and Network Load Balancer end-to-end IPv6 support AWS Transit Gateway introduces intra-region peering for simplified cloud operations and network connectivity Amazon Virtual Private Cloud (VPC) announces IP Address Manager (IPAM) to help simplify IP address management on AWS Amazon Virtual Private Cloud (VPC) announces Network Access Analyzer to help you easily identify unintended network access Introducing AWS Cloud WAN Preview Introducing AWS Direct Connect SiteLink Other Recover from accidental deletions of your snapshots using Recycle Bin Amazon EBS Snapshots introduces a new tier, Amazon EBS Snapshots Archive, to reduce the cost of long-term retention of EBS Snapshots by up to 75% Amazon CloudFront now supports configurable CORS, security, and custom HTTP response headers Amazon EC2 now supports access to Red Hat Knowledgebase Amazon EC2 Fleet and Spot Fleet now support automatic instance termination with Capacity Rebalancing AWS announces a new capability to switch license types for Windows Server and SQL Server applications on Amazon EC2 AWS Batch introduces fair-share scheduling Amazon EC2 Auto Scaling Now Supports Predictive Scaling with Custom Metrics Dev & Ops New services Measure and Improve Your Application Resilience with AWS Resilience Hub | AWS News Blog Scalable, Cost-Effective Disaster Recovery in the Cloud | AWS News Blog Announcing general availability of AWS Elastic Disaster Recovery AWS announces the launch of AWS AppConfig Feature Flags in preview Announcing Amazon DevOps Guru for RDS, an ML-powered capability that automatically detects and diagnoses performance and operational issues within Amazon Aurora Introducing Amazon CloudWatch Metrics Insights (Preview) Introducing Amazon CloudWatch RUM for monitoring applications' client-side performance IaC AWS announces Construct Hub general availability AWS Cloud Development Kit (AWS CDK) v2 is now generally available You can now import your AWS CloudFormation stacks into a CloudFormation stack set You can now submit multiple operations for simultaneous execution with AWS CloudFormation StackSets AWS CDK releases v1.126.0 - v1.130.0 with high-level APIs for AWS App Runner and hotswap support for Amazon ECS and AWS Step Functions SDKs AWS SDK for Swift (Developer Preview) AWS SDK for Kotlin (Developer Preview) AWS SDK for Rust (Developer Preview) CICD AWS Proton now supports Terraform Open Source for infrastructure provisioning AWS Proton introduces Git management of infrastructure as code templates AWS App2Container now supports Jenkins for setting up a CI/CD pipeline Other Amazon CodeGuru Reviewer now detects hardcoded secrets in Java and Python repositories EC2 Image Builder enables sharing Amazon Machine Images (AMIs) with AWS Organizations and Organization Units Amazon 
Corretto 17 Support Roadmap Announced Amazon DevOps Guru now Supports Multi-Account Insight Aggregation with AWS Organizations AWS Toolkits for Cloud9, JetBrains and VS Code now support interaction with over 200 new resource types AWS Fault Injection Simulator now supports Amazon CloudWatch Alarms and AWS Systems Manager Automation Runbooks. AWS Device Farm announces support for testing web applications hosted in an Amazon VPC Amazon CloudWatch now supports anomaly detection on metric math expressions Introducing Amazon CloudWatch Evidently for feature experimentation and safer launches New – Amazon CloudWatch Evidently – Experiments and Feature Management | AWS News Blog Introducing AWS Microservice Extractor for .NET Security AWS Secrets Manager increases secrets limit to 500K per account AWS CloudTrail announces ErrorRate Insights AWS announces the new Amazon Inspector for continual vulnerability management Amazon SQS Announces Server-Side Encryption with Amazon SQS-managed encryption keys (SSE-SQS) AWS WAF adds support for Captcha AWS Shield Advanced introduces automatic application-layer DDoS mitigation Security Hub AWS Security Hub adds support for AWS PrivateLink for private access to Security Hub APIs AWS Security Hub adds three new FSBP controls and three new partners SSO Manage Access Centrally for CyberArk Users with AWS Single Sign-On Manage Access Centrally for JumpCloud Users with AWS Single Sign-On AWS Single Sign-On now provides one-click login to Amazon EC2 instances running Microsoft Windows AWS Single Sign-On is now in scope for AWS SOC reporting Control Tower AWS Control Tower now supports concurrent operations for detective guardrails AWS Control Tower now supports nested organizational units AWS Control Tower now provides controls to meet data residency requirements Deny services and operations for AWS Regions of your choice with AWS Control Tower AWS Control Tower introduces Terraform account provisioning and customization Data Storage & Processing Databases Relational databases Announcing Amazon RDS Custom for SQL Server New Multi-AZ deployment option for Amazon RDS for PostgreSQL and for MySQL; increased read capacity, lower and more consistent write transaction latency, and shorter failover time (Preview) Amazon RDS now supports cross account KMS keys for exporting RDS Snapshots Amazon Aurora supports MySQL 8.0 Amazon RDS on AWS Outposts now supports backups on AWS Outposts Athena Amazon Athena adds cost details to query execution plans Amazon Athena announces cross-account federated query New and improved Amazon Athena console is now generally available Amazon Athena now supports new Lake Formation fine-grained security and reliable table features Announcing Amazon Athena ACID transactions, powered by Apache Iceberg (Preview) Redshift Announcing preview for write queries with Amazon Redshift Concurrency Scaling Amazon Redshift announces native support for SQLAlchemy and Apache Airflow open-source frameworks Amazon Redshift simplifies the use of other AWS services by introducing the default IAM role Announcing Amazon Redshift cross-region data sharing (preview) Announcing preview of SQL Notebooks support in Amazon Redshift Query Editor V2 Neptune Announcing AWS Graviton2-based instances for Amazon Neptune AWS releases open source JDBC driver to connect to Amazon Neptune MemoryDB Amazon MemoryDB for Redis now supports AWS Graviton2-based T4g instances and a 2-month Free Trial Database Migration Service AWS Database Migration Service now supports parallel load for 
partitioned data to S3 AWS Database Migration Service now supports Kafka multi-topic AWS Database Migration Service now supports Azure SQL Managed Instance as a source AWS Database Migration Service now supports Google Cloud SQL for MySQL as a source Introducing AWS DMS Fleet Advisor for automated discovery and analysis of database and analytics workloads (Preview) AWS Database Migration Service now offers a new console experience, AWS DMS Studio AWS Database Migration Service now supports Time Travel, an improved logging mechanism Other Database Activity Streams now supports Graviton2-based instances Amazon Timestream now offers faster and more cost-effective time series data processing through scheduled queries, multi-measure records, and magnetic storage writes Amazon DynamoDB announces the new Amazon DynamoDB Standard-Infrequent Access table class, which helps you reduce your DynamoDB costs by up to 60 percent Achieve up to 30% better performance with Amazon DocumentDB (with MongoDB compatibility) using new Graviton2 instances S3 Amazon S3 on Outposts now delivers strong consistency automatically for all applications Amazon S3 Lifecycle further optimizes storage cost savings with new actions and filters Announcing the new Amazon S3 Glacier Instant Retrieval storage class - the lowest cost archive storage with milliseconds retrieval Amazon S3 Object Ownership can now disable access control lists to simplify access management for data in S3 Amazon S3 Glacier storage class is now Amazon S3 Glacier Flexible Retrieval; storage price reduced by 10% and bulk retrievals are now free Announcing the new S3 Intelligent-Tiering Archive Instant Access tier - Automatically save up to 68% on storage costs Amazon S3 Event Notifications with Amazon EventBridge help you build advanced serverless applications faster Amazon S3 console now reports security warnings, errors, and suggestions from IAM Access Analyzer as you author your S3 policies Amazon S3 adds new S3 Event Notifications for S3 Lifecycle, S3 Intelligent-Tiering, object tags, and object access control lists Glue AWS Glue DataBrew announces native console integration with Amazon AppFlow AWS Glue DataBrew now supports custom SQL statements to retrieve data from Amazon Redshift and Snowflake AWS Glue DataBrew now allows customers to create data quality rules to define and validate their business requirements FSx Introducing Amazon FSx for OpenZFS Amazon FSx for Lustre now supports linking multiple Amazon S3 buckets to a file system Amazon FSx for Lustre can now automatically update file system contents as data is deleted and moved in Amazon S3 Announcing the next generation of Amazon FSx for Lustre file systems Backup Announcing preview of AWS Backup for Amazon S3 AWS Backup adds support for Amazon Neptune AWS Backup adds support for Amazon DocumentDB (with MongoDB compatibility) AWS Backup provides new resource assignment rules for your data protection policies AWS Backup adds support for VMware workloads Other AWS Lake Formation now supports AWS PrivateLink AWS Transfer Family adds identity provider options and enhanced monitoring capabilities Introducing ability to connect to EMR clusters in different subnets in EMR Studio AWS Snow Family now supports external NTP server configuration Announcing data tiering for Amazon ElastiCache for Redis Now execute python files and notebooks from another notebook in EMR Studio AWS Snow Family launches offline tape data migration capability AI & ML SageMaker Introducing Amazon SageMaker Canvas - a visual, 
no-code interface to build accurate machine learning models Announcing Fully Managed RStudio on Amazon SageMaker for Data Scientists | AWS News Blog Amazon SageMaker now supports inference testing with custom domains and headers from SageMaker Studio Amazon SageMaker Pipelines now supports retry policies and resume Announcing new deployment guardrails for Amazon SageMaker Inference endpoints Amazon announces new NVIDIA Triton Inference Server on Amazon SageMaker Amazon SageMaker Pipelines now integrates with SageMaker Model Monitor and SageMaker Clarify Amazon SageMaker now supports cross-account lineage tracking and multi-hop lineage querying Introducing Amazon SageMaker Inference Recommender Introducing Amazon SageMaker Ground Truth Plus: Create high-quality training datasets without having to build labeling applications or manage the labeling workforce on your own Amazon SageMaker Studio Lab (currently in preview), a free, no-configuration ML service Amazon SageMaker Studio now enables interactive data preparation and machine learning at scale within a single universal notebook through built-in integration with Amazon EMR Other General Availability of Syne Tune, an open-source library for distributed hyperparameter and neural architecture optimization Amazon Translate now supports AWS KMS Encryption Amazon Kendra releases AWS Single Sign-On integration for secure search Amazon Transcribe now supports automatic language identification for streaming transcriptions AWS AI for data analytics (AIDA) partner solutions Introducing Amazon Lex Automated Chatbot Designer (Preview) Amazon Kendra launches Experience Builder, Search Analytics Dashboard, and Custom Document Enrichment Other Cool Stuff In The Works – AWS Canada West (Calgary) Region | AWS News Blog Unified Search in the AWS Management Console now includes blogs, knowledge articles, events, and tutorials AWS DeepRacer introduces multi-user account management Amazon Pinpoint launches in-app messaging as a new communications channel Amazon AppStream 2.0 Introduces Linux Application Streaming Amazon SNS now supports publishing batches of up to 10 messages in a single API request Announcing usability improvements in the navigation bar of the AWS Management Console Announcing General Availability of Enterprise On-Ramp Announcing preview of AWS Private 5G AWS Outposts is Now Available in Two Smaller Form Factors Introducing AWS Mainframe Modernization - Preview Introducing the AWS Migration and Modernization Competency Announcing AWS Data Exchange for APIs Amazon WorkSpaces introduces Amazon WorkSpaces Web Amazon SQS Enhances Dead-letter Queue Management Experience For Standard Queues Introducing AWS re:Post, a new, community-driven, questions-and-answers service AWS Resource Access Manager enables support for global resource types AWS Ground Station launches expanded support for Software Defined Radios in Preview Announcing Amazon Braket Hybrid Jobs for running hybrid quantum-classical workloads on Amazon Braket Introducing AWS Migration Hub Refactor Spaces - Preview Well-Architected Framework Customize your AWS Well-Architected Review using Custom Lenses New Sustainability Pillar for the AWS Well-Architected Framework IoT Announcing AWS IoT RoboRunner, Now Available in Preview AWS IoT Greengrass now supports Microsoft Windows devices AWS IoT Core now supports Multi-Account Registration certificates on IoT Credential Provider endpoint Announcing AWS IoT FleetWise (Preview), a new service for transferring vehicle data to the cloud more 
efficiently Announcing AWS IoT TwinMaker (Preview), a service that makes it easier to build digital twins AWS IoT SiteWise now supports hot and cold storage tiers for industrial data New connectivity software, AWS IoT ExpressLink, accelerates IoT development (Preview) AWS IoT Device Management Fleet Indexing now supports two additional data sources (Preview) Connect Amazon Connect now enables you to create and orchestrate tasks directly from Flows Amazon Connect launches scheduled tasks Amazon Connect launches Contact APIs to fetch and update contact details programmatically Amazon Connect launches API to configure security profiles programmatically Amazon Connect launches APIs to archive and delete contact flows Amazon Connect now supports contact flow modules to simplify repeatable logic Sponsors CMD Solutions Silver Sponsors Cevo Versent
On The Cloud Pod this week, the team finds out whose re:Invent 2021 crystal ball was most accurate. Also, Graviton3 is announced, and Adam Selipsky gives his first re:Invent keynote. A big thanks to this week's sponsors: Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. JumpCloud, which offers a complete platform for identity, access, and device management — no matter where your users and devices are located. This week's highlights
Are you interested in experimenting with machine learning (ML), but don't know where or how to begin? Now in preview, the new Amazon SageMaker Studio Lab is a free, no-configuration service built to reduce friction when getting started with ML. In this episode, Nicki speaks with Brian Granger, one of the founders of Project Jupyter, about Amazon SageMaker Studio Lab, why we built it, and more. Read the blog: https://go.aws/32ZhJSi Learn more: https://go.aws/3dsJgh9 Sign up for the preview: https://bit.ly/31Lk03a
At its re:Invent conference, Amazon Web Services announced two new initiatives to make machine learning more accessible to anyone interested in learning and experimenting with the technology. The first is a $10 million scholarship programme to make AI and ML education accessible to underserved students around the world; it will benefit 2,000 students every year. The top 500 students, after completing a first Udacity Nanodegree, will get a shot at a second one and mentorship from experts at Amazon and Intel. The second initiative is about increasing access to machine learning through Amazon SageMaker Studio Lab, which gives everyone access to a no-cost version of Amazon SageMaker—an AWS service that helps customers build, train, and deploy machine learning models. “The two initiatives we are announcing today are designed to open up educational opportunities in machine learning to make it more widely accessible to anyone interested in the technology,” said Swami Sivasubramanian, Vice President of Amazon Machine Learning at AWS. US lawmakers have called executives of eight major cryptocurrency firms to testify before a congressional committee on December 8, BBC reports, as the government scrutinises the sector further. Coinbase's Alesia Haas, Circle's Jeremy Allaire and Bitfury's Brian Brooks are among the executives asked to testify, according to BBC. This will be the first time companies representing the controversial sector have been questioned in this way. Square, a fintech company co-founded by Twitter co-founder Jack Dorsey, is changing its name to Block, the company said Wednesday. Dorsey stepped down as Twitter's CEO earlier this week; he had been leading Twitter part-time while also serving as CEO of Square. “The change to Block acknowledges the company's growth,” the company said in a press release. “Since its start in 2009, the company has added Cash App, TIDAL, and TBD54566975 as businesses, and the name change creates room for further growth. Block is an overarching ecosystem of many businesses united by their purpose of economic empowerment, and serves many people—individuals, artists, fans, developers, and sellers.” Microsoft has released a cheaper, standalone version of its Teams collaboration software, Teams Essentials, aimed at smaller businesses. “We know how difficult the past 20 months have been for small businesses. They've had to demonstrate extreme flexibility to adapt, often with limited access to tools and technology,” Jared Spataro, corporate vice president for Microsoft 365, said in a blog post. “Teams Essentials is built specifically to meet the unique needs of small businesses, enabling them to thrive in this new era of work.” Teams Essentials includes group meetings for up to 30 hours, meetings with up to 300 people, and 10GB of cloud storage per user. A subscription will cost $4 in the US, and Rs. 100 in India, per person per month. CRED, a fintech company founded by entrepreneur Kunal Shah, is buying Happay, which provides corporate expense management in India. The acquisition is expected to be a cash and stock deal potentially valuing Happay at approximately $180 million, the company said in a press release. While Happay will operate as a separate entity, the team will work closely with CRED to tap its ecosystem, build distribution, add products and expand the business. Happay's 230-member team will get all the benefits extended to CRED team members, including its ESOP programme, the company said in the release.
On The Cloud Pod this week, the team wishes for time-traveling data. Also, GCP announces Data Lakehouse, Azure hosts Ignite 2021, and Microsoft is out for the metaverse. A big thanks to this week's sponsors: Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. JumpCloud, which offers a complete platform for identity, access, and device management — no matter where your users and devices are located. This week's highlights
In this episode, we talk about Amazon SageMaker and how it can help with ML model development, including model building, training, and deployment. We cover three advantages in each of these three areas, with points such as: 1. Hosting ML endpoints to deploy models to thousands or millions of users. 2. Saving costs on model training using SageMaker. 3. Using CloudWatch Logs with SageMaker endpoints to debug ML models. 4. Using preconfigured environments or models provided by AWS. 5. Automatically saving model artifacts to Amazon S3 as you train in SageMaker. 6. Using version control for SageMaker notebooks with GitHub. And more… Please rate, subscribe, and share this episode with anyone who might find SageMaker useful in their work. I feel that SageMaker is a great tool and want to share it with data scientists. For comments, feedback, or questions, or if you think I have missed something in the episode, please reach out to me on LinkedIn: https://www.linkedin.com/in/sanketgupta107/ --- Send in a voice message: https://anchor.fm/the-data-life-podcast/message Support this podcast: https://anchor.fm/the-data-life-podcast/support
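To make the training-and-deployment workflow described above concrete, here is a minimal, hypothetical sketch using the SageMaker Python SDK. The IAM role, training script, and S3 path are placeholders rather than details from the episode, and the instance type and framework version are just illustrative choices.

```python
# Hypothetical sketch: train a scikit-learn model on SageMaker and deploy it as an endpoint.
# The role ARN, training script, and S3 path below are placeholders, not from the episode.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder execution role

estimator = SKLearn(
    entry_point="train.py",            # placeholder training script
    framework_version="1.0-1",
    instance_type="ml.m5.large",
    role=role,
    sagemaker_session=session,
)

# fit() runs the training job; model artifacts are saved to S3 automatically.
estimator.fit({"train": "s3://my-bucket/train/"})  # placeholder S3 path

# deploy() creates a real-time endpoint; its logs show up in CloudWatch for debugging.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(predictor.predict([[0.5, 1.2, 3.4]]))

predictor.delete_endpoint()  # tear down to avoid ongoing endpoint charges
```

The same pattern applies to SageMaker's built-in algorithms and other framework estimators; only the estimator class and its hyperparameters change.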
The AWS machine learning services are more examples of the newer offerings. Nevertheless, they are growing fast and can help you embrace cutting-edge technology. Machine learning is a relatively recent technology in general, so the time you spend understanding these services may help you land that next job.
Amazon SageMaker: This service provides a method for building, training, and deploying machine learning models at any scale. It is a great way to try out machine learning, and the time you spend here is good to use on your next resume update. You do need to put some data on S3 to analyze and then check out the use cases. There is a free tier for the first two months.
Amazon Comprehend: Quick and easy text analysis. Send your text to this service to analyze it for keywords, among many other capabilities. There is a free tier you can use to try it out and find ways to organize and mine your content.
Amazon Lex: This service allows you to build voice bots and chatbots using the technology that drives Alexa. There are some templates, and the interface makes it easy to get started quickly.
Amazon Polly: If you want to create audio from your content, then this is the service for you. Try out the service a few thousand words at a time for free, and you can even download the audio in MP3 format.
Amazon Rekognition: The features that Comprehend provides for text are brought to the video world by Rekognition. This service analyzes video and can highlight or recognize people, objects, and other details you might search for in a stream.
Amazon Translate: This service provides a quick and easy way to translate text between any two languages. Much like Google Translate, it is quick and provides an API that you can use to significantly increase your audience.
Amazon Transcribe: If you have ever wondered about transcribing audio notes (or a podcast), then this is the service for you. It is quick and easy to customize, even for highly technical terms. The accuracy varies based on the clarity of the audio and background noise.
AWS DeepLens: This offering is best understood by working through the tutorials. It provides a way to analyze videos for objects, faces, and activities. An essential difference between this and the others is that it is a piece of hardware and not just a service: a camera with HD video and onboard analysis tools for real-time processing.
AWS Deep Learning AMIs: These provide quick-start machine learning on EC2 through preconfigured AMIs. Configuring a machine learning development environment can be tedious and time-consuming; these AMI options offer a shortcut to get working sooner.
Apache MXNet on AWS: This is a machine learning framework. Apache MXNet is a fast and scalable training and inference framework with an easy-to-use, concise API for machine learning. MXNet includes the Gluon interface, which allows developers of all skill levels to get started with deep learning on the cloud, on edge devices, and in mobile apps. In just a few lines of Gluon code, you can build linear regression, convolutional networks, and recurrent LSTMs for object detection, speech recognition, recommendation, and personalization.
TensorFlow on AWS: This is a machine learning framework on AWS. I think their description works best and avoids any ignorance about it on my end. "TensorFlow™ enables developers to quickly and easily get started with deep learning in the cloud. 
The framework has broad support in the industry and has become a popular choice for deep learning research and application development, particularly in areas such as computer vision, natural language understanding, and speech translation. You can get started on AWS with a fully-managed TensorFlow experience with Amazon SageMaker, a platform to build, train, and deploy machine learning models at scale. Or, you can use the AWS Deep Learning AMIs to build custom environments and workflows with TensorFlow and other popular frameworks including Apache MXNet, PyTorch, Caffe, Caffe2, Chainer, Gluon, Keras, and Microsoft Cognitive Toolkit."
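As a quick illustration of how approachable the managed AI services above are, here is a small, hypothetical sketch that calls a few of them through boto3. It assumes AWS credentials and a default region are already configured; the sample text and the Polly voice are arbitrary choices, not anything from the text above.

```python
# Illustrative sketch of calling a few of the AI services above with boto3.
# Assumes AWS credentials and a default region are already configured.
import boto3

comprehend = boto3.client("comprehend")
translate = boto3.client("translate")
polly = boto3.client("polly")

text = "Amazon SageMaker makes it easier to build, train, and deploy ML models."

# Amazon Comprehend: extract key phrases from the text.
phrases = comprehend.detect_key_phrases(Text=text, LanguageCode="en")
print([p["Text"] for p in phrases["KeyPhrases"]])

# Amazon Translate: translate the text from English to Spanish.
result = translate.translate_text(
    Text=text, SourceLanguageCode="en", TargetLanguageCode="es"
)
print(result["TranslatedText"])

# Amazon Polly: synthesize the text to MP3 audio.
speech = polly.synthesize_speech(Text=text, OutputFormat="mp3", VoiceId="Joanna")
with open("speech.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())
```

Each call is a single API request with no model to train or host, which is the main trade-off versus building and deploying your own models in SageMaker.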