POPULARITY
In this episode of What's New in Cloud FinOps, Stephen Old and Frank discuss the latest updates in cloud computing, focusing on Azure, Google Cloud, and AWS. They cover the retirement of certain Azure virtual machines, the introduction of serverless GPUs, and the benefits of Amazon Bedrock for cost transparency. The conversation also touches on new features for Azure databases, insights from a Forrester study on Spanner, and the importance of calculating AI costs. Additionally, they discuss licensing changes for Amazon FSx, tiered storage for Spanner, and the deprecation of the AWS connector to Azure. The episode concludes with a look at sustainability efforts and upcoming events in the cloud computing space.
Takeaways:
Serverless GPUs enable on-demand AI workloads with automatic scaling.
Amazon Bedrock introduces real-time cost transparency for custom models.
Physical snapshots for Azure databases enhance backup flexibility.
A Forrester study shows significant ROI with Spanner.
Understanding AI costs on Google Cloud is crucial for budgeting.
Amazon FSx for NetApp removes SnapLock licensing fees.
Tiered storage for Spanner optimizes cost and performance.
The AWS connector to Azure is deprecated, focusing on native solutions.
Azure OpenAI service offers discounts for provisioned reservations.
Discover how Oracle APEX leverages OCI AI services to build smarter, more efficient applications. Hosts Lois Houston and Nikita Abraham interview APEX experts Chaitanya Koratamaddi, Apoorva Srinivas, and Toufiq Mohammed about how key services like OCI Vision, Oracle Digital Assistant, and Document Understanding integrate with Oracle APEX. Packed with real-world examples, this episode highlights all the ways you can enhance your APEX apps. Oracle APEX: Empowering Low Code Apps with AI: https://mylearn.oracle.com/ou/course/oracle-apex-empowering-low-code-apps-with-ai/146047/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we looked at how generative AI powers Oracle APEX and in today's episode, we're going to focus on integrating APEX with OCI AI Services. Lois: That's right, Niki. We're going to look at how you can use Oracle AI services like OCI Vision, Oracle Digital Assistant, Document Understanding, OCI Generative AI, and more to enhance your APEX apps. 01:03 Nikita: And to help us with it all, we've got three amazing experts with us, Chaitanya Koratamaddi, Director of Product Management at Oracle, and senior product managers, Apoorva Srinivas and Toufiq Mohammed. In today's episode, we'll go through each Oracle AI service and look at how it interacts with APEX. Apoorva, let's start with you. Can you explain what the OCI Vision service is? Apoorva: Oracle Cloud Infrastructure Vision is a serverless multi-tenant service accessible using the console or REST APIs. You can upload images to detect and classify objects in them. With prebuilt models available, developers can quickly build image recognition into their applications without machine learning expertise. OCI Vision service provides a fully managed model infrastructure. With complete integration with OCI Data Labeling, you can build custom models easily. OCI Vision service provides pretrained models-- Image Classification, Object Detection, Face Detection, and Text Recognition. You can build custom models for Image Classification and Object Detection. 02:24 Lois: Ok. What about its use cases? How can OCI Vision make APEX apps more powerful? Apoorva: Using OCI Vision, you can make images and videos discoverable and searchable in your APEX app. You can use OCI Vision to detect and classify objects in the images. OCI Vision also highlights the objects using a red rectangular box. This comes in handy in use cases such as detecting vehicles that have violated the rules in traffic images. You can use OCI Vision to identify visual anomalies in your data. This is a very popular use case where you can detect anomalies in cancer X-ray images to detect cancer. These are some of the most popular use cases of using OCI Vision with your APEX app. 
But the possibilities are endless and you can use OCI Vision for any of your image analysis needs. 03:29 Nikita: Let's shift gears to Oracle Digital Assistant. Chaitanya, can you tell us what it's all about? Chaitanya: Oracle Digital Assistant is a low-code conversational AI platform that allows businesses to build and deploy AI assistants. It provides natural language understanding, automatic speech recognition, and text-to-speech capabilities to enable human-like interactions with customers and employees. Oracle Digital Assistant comes with prebuilt templates for you to get started. 04:00 Lois: What are its key features and benefits, Chaitanya? How does it enhance the user experience? Chaitanya: Oracle Digital Assistant provides conversational AI capabilities that include generative AI features, natural language understanding and ML, AI-powered voice, and analytics and insights. Integration with enterprise applications becomes easier with a unified conversational experience, prebuilt chatbots for Oracle Cloud applications, and chatbot architecture frameworks. Oracle Digital Assistant provides advanced conversational design tools, including a conversational designer, a dialogue and domain trainer, and native multilingual support. Oracle Digital Assistant is open, scalable, and secure. It provides multi-channel support, automated bot-to-agent transfer, and integrated authentication profiles. 04:56 Nikita: And what about the architecture? What happens at the back end? Chaitanya: Developers assemble digital assistants from one or more skills. Skills can be based on prebuilt skills provided by Oracle or third parties, custom developed, or based on one of the many skill templates available. 05:16 Lois: Chaitanya, what exactly are “skills” within the Oracle Digital Assistant framework? Chaitanya: Skills are individual chatbots that are designed to interact with users and fulfill specific types of tasks. Each skill helps a user complete a task through a combination of text messages and simple UI elements like select lists. When a user request is submitted through a channel, the Digital Assistant routes the request to the most appropriate skill. Skills can combine a multilingual NLP deep learning engine, a powerful dialog flow engine, and integration components to connect to back-end systems. Skills provide a modular way to build your chatbot functionality. Users connect with a chatbot through channels such as Facebook, Microsoft Teams, or, in our case, an Oracle APEX chatbot, which is embedded into an APEX application. 06:21 Nikita: That's fascinating. So, what are some use cases of Oracle Digital Assistant in APEX apps? Chaitanya: Digital assistants streamline approval processes by collecting information, routing requests, and providing status updates. Digital assistants offer instant access to information and documentation, answering common questions and guiding users. Digital assistants assist sales teams by automating tasks, responding to inquiries, and guiding prospects through the sales funnel. Digital assistants facilitate procurement by managing orders, tracking deliveries, and handling supplier communication. Digital assistants simplify expense approvals by collecting reports, validating receipts, and routing them for managerial approval. Digital assistants manage inventory by tracking stock levels, reordering supplies, and providing real-time inventory updates. Digital assistants have become a common UX feature in any enterprise application.
07:28 Want to learn how to design stunning, responsive enterprise applications directly from your browser with minimal coding? The new Oracle APEX Developer Professional learning path and certification enables you to leverage AI-assisted development, including generative AI and Database 23ai, to build secure, scalable web and mobile applications with advanced AI-powered features. From now through May 15, 2025, we're waiving the certification exam fee (valued at $245). So, what are you waiting for? Visit mylearn.oracle.com to get started today. 08:09 Nikita: Welcome back! Thanks for that, Chaitanya. Toufiq, let's talk about the OCI Document Understanding service. What is it? Toufiq: Using this service, you can upload documents to extract text, tables, and other key data. This means the service can automatically identify and extract relevant information from various types of documents, such as invoices, receipts, contracts, etc. The service is serverless and multitenant, which means you don't need to manage any servers or infrastructure. You can access this service using the console, REST APIs, SDK, or CLI, giving you multiple ways to integrate. 08:55 Nikita: What do we use for APEX apps? Toufiq: For APEX applications, we will be using REST APIs to integrate the service. Additionally, you can process individual files or batches of documents using the ProcessorJob API endpoint. This flexibility allows you to handle different volumes of documents efficiently, whether you need to process a single document or thousands at once. With these capabilities, the OCI Document Understanding service can significantly streamline your document processing tasks, saving time and reducing the potential for manual errors. 09:36 Lois: Ok. What are the different types of models available? How do they cater to various business needs? Toufiq: Let us start with pre-trained models. These are ready-to-use models that come right out of the box, offering a range of functionalities. The available models include Optical Character Recognition (OCR), which enables the service to extract text from documents so you can digitize scanned documents and precisely extract their text content. Key-value extraction is useful in streamlining tasks like invoice processing. Table extraction can intelligently extract tabular data from documents. Document classification automatically categorizes documents based on their content. OCR PDF enables seamless extraction of text from PDF files. Now, what if your business needs go beyond these pre-trained models? That's where custom models come into play. You have the flexibility to train and build your own models on top of these foundational pre-trained models. Models available for training are key-value extraction and document classification. 10:50 Nikita: What does the architecture look like for OCI Document Understanding? Toufiq: You can ingest or supply the input file in two different ways. You can upload the file to an OCI Object Storage location. And in your request, you can point the Document Understanding service to pick the file from this Object Storage location. Alternatively, you can upload a file directly from your computer. Once the file is uploaded, the Document Understanding service can process the file and extract key information using the pre-trained models. You can also customize models to tailor the extraction to your data or use case.
After processing the file, the Document Understanding service stores the results in JSON format in the Object Storage output bucket. Your Oracle APEX application can then read the JSON file from the Object Storage output location, parse the JSON, and store useful information in a local table or display it on the screen to the end user. 11:52 Lois: And what about use cases? How are various industries using this service? Toufiq: In financial services, you can utilize Document Understanding to extract data from financial statements, classify and categorize transactions, identify and extract payment details, and streamline tax document management. In manufacturing, you can perform text extraction from shipping labels and bill of lading documents, extract data from production reports, and identify and extract vendor details. In the healthcare industry, you can automatically process medical claims, extract patient information from forms, classify and categorize medical records, and identify and extract diagnostic codes. This is not an exhaustive list, but it provides insights into some industry-specific use cases for Document Understanding. 12:50 Nikita: Toufiq, let's switch to the big topic everyone's excited about—the OCI Generative AI Service. What exactly is it? Toufiq: OCI Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models that cover a wide range of use cases. It provides enterprise-grade generative AI with data governance and security, which means only you have access to your data and custom-trained models. OCI Generative AI provides pre-trained out-of-the-box LLMs for text generation, summarization, and text embedding. OCI Generative AI also provides the necessary tools and infrastructure to define models with your own business knowledge. 13:37 Lois: Generally speaking, how is OCI Generative AI useful? Toufiq: It supports various large language models. New models available from Meta and Cohere include Llama 2, developed by Meta, and Cohere's Command model, their flagship text generation model. Additionally, Cohere offers the Summarize model, which provides high-quality summaries, accurately capturing essential information from documents, and the Embed model, which converts text to vector embedding representations. OCI Generative AI also offers dedicated AI clusters, enabling you to host foundational models on private GPUs. It integrates with LangChain, an open-source framework for developing new interfaces for generative AI applications powered by language models. Moreover, OCI Generative AI facilitates generative AI operations, providing content moderation controls, zero-downtime endpoint model swaps, and endpoint deactivation and activation capabilities. For each model endpoint, OCI Generative AI captures a series of analytics, including call statistics, tokens processed, and error counts. 14:58 Nikita: What about the architecture? How does it handle user input? Toufiq: Users can input natural language, input/output examples, and instructions. The LLM analyzes the text and can generate, summarize, transform, extract information, or classify text according to the user's request. The response is sent back to the user in the specified format, which can include raw text or formatting like bullets and numbering, etc. 15:30 Lois: Can you share some practical use cases for generative AI in APEX apps? Toufiq: Some of the OCI generative AI use cases for your Oracle APEX apps include text summarization.
Generative AI can quickly summarize lengthy documents such as articles, transcripts, doctor's notes, and internal documents. Businesses can utilize generative AI to draft marketing copy, emails, blog posts, and product descriptions efficiently. Generative AI-powered chatbots are capable of brainstorming, problem solving, and answering questions. With generative AI, content can be rewritten in different styles or languages. This is particularly useful for localization efforts and catering to diverse audiences. Generative AI can classify intent in customer chat logs, support tickets, and more. This helps businesses understand customer needs better and provide tailored responses and solutions. By searching call transcripts and internal knowledge sources, generative AI enables businesses to efficiently answer user queries. This enhances information retrieval and decision-making processes. 16:47 Lois: Before we let you go, can you explain what Select AI is? How is it different from the other AI services? Toufiq: Select AI is a feature of Autonomous Database. This is where Select AI differs from the other AI services. Be it OCI Vision, Document Understanding, or OCI Generative AI, these are all fully managed standalone services on Oracle Cloud, accessible via REST APIs. Whereas Select AI is a feature available in Autonomous Database. That means to use Select AI, you need Autonomous Database. 17:26 Nikita: And what can developers do with Select AI? Toufiq: Traditionally, SQL is the language used to query the data in the database. With Select AI, you can talk to the database and get insights from the data in the database using human language. At the most basic level, what Select AI does is generate SQL queries from natural language, like an NL2SQL capability. 17:52 Nikita: How does it actually do that? Toufiq: When a user asks a question, the first step Select AI does is look into the AI profile, which you, as a developer, define. The AI profile holds crucial information, such as table names, the LLM provider, and the credentials needed to authenticate with the LLM service. Next, Select AI constructs a prompt. This prompt includes information from the AI profile and the user's question. Essentially, it's a packet of information containing everything the LLM service needs to generate SQL. The next step is generating SQL using the LLM. The prompt prepared by Select AI is sent to the available LLM services via REST. Which LLM to use is configured in the AI profile. The supported providers are OpenAI, Cohere, Azure OpenAI, and OCI Generative AI. Once the SQL is generated by the LLM service, it is returned to the application. The app can then handle the SQL query in various ways, such as displaying the SQL results in a report format or as charts, etc. 19:05 Lois: This has been an incredible discussion! Thank you, Chaitanya, Apoorva, and Toufiq, for walking us through all of these amazing AI tools. If you're ready to dive deeper, visit mylearn.oracle.com and search for the Oracle APEX: Empowering Low Code Apps with AI course. You'll find step-by-step guides and demos for everything we covered today. Nikita: Until next week, this is Nikita Abraham… Lois: And Lois Houston signing off! 19:31 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
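The Select AI flow Toufiq describes (question, AI profile, prompt, LLM, generated SQL) can be exercised directly from application code. Below is a minimal, hedged sketch using the python-oracledb driver against an Autonomous Database; the profile name, user, password, and connection string are placeholders, and it assumes an AI profile has already been created with DBMS_CLOUD_AI.CREATE_PROFILE.

```python
# Hedged sketch: ask Select AI to turn a natural-language question into SQL.
# Profile name, user, password, and DSN below are placeholder assumptions.
import oracledb

oracledb.defaults.fetch_lobs = False  # return the CLOB result as a plain string

conn = oracledb.connect(user="demo", password="***", dsn="mydb_high")
cur = conn.cursor()

# action => 'showsql' returns the generated SQL; 'runsql' would execute it instead.
cur.execute(
    """
    SELECT DBMS_CLOUD_AI.GENERATE(
             prompt       => :question,
             profile_name => :profile,
             action       => 'showsql')
      FROM dual
    """,
    {"question": "How many orders were placed last month?",
     "profile": "MY_AI_PROFILE"},
)
print(cur.fetchone()[0])  # the SQL statement the configured LLM produced
```

An APEX app would typically wrap the same call in a PL/SQL process, but the request and response shape is the same.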
January 2025 FinOps news.
Instances/Compute
https://cloud.google.com/blog/products/compute/announcing-smaller-machine-types-for-a3-high-vms - The A3 Ultra machine type is available in the following region and zone: St. Ghislain, Belgium, Europe - europe-west1-b
https://cloud.google.com/blog/products/compute/first-google-axion-processor-c4a-now-ga-with-titanium-ssd - Google Axion Processor-based C4A VMs with Titanium SSD are now generally available. Part of our general-purpose machine family, these instances come with up to 6 TiB of Titanium SSD disks. Titanium SSD is our latest generation of Local SSD. It uses Titanium I/O offload processing and offers enhanced SSD security, performance, and management.
Compute Engine - Generally available: Managed instance groups (MIGs) let you create pools of suspended and stopped virtual machine (VM) instances. You can manually suspend and stop VMs in a MIG to save on costs, or use suspended and stopped pools to speed up scale-out operations of your MIG. For more information, see About suspending and stopping VMs in a MIG.
Data/DBs/AI
Azure OpenAI provisioned reservations (June - but not sure we mentioned it): With the introduction of 1-month and 1-year Provisioned Reservations, businesses that commit to using Azure OpenAI at scale can enjoy substantial discounts compared to the standard pay-as-you-go pricing. You can manage these reservations in Cost analysis in the same way as any other reservation. For further details on how to purchase and manage these reservations, please refer to the article: Save costs with Microsoft Azure OpenAI Service Provisioned Reservations.
Storage
https://aws.amazon.com/about-aws/whats-new/2025/01/amazon-s3-metadata-generally-available
Generally available: Azure NetApp Files now supports a minimum volume size of 50 GiB.
Generally available: Azure Files provisioned v2 billing model for HDD (Standard) - (Added to roadmap 8 Jan 2025, but released in October 2024.) The provisioned v2 model for Azure Files HDD (standard) pairs predictability of total cost of ownership with flexibility, allowing you to create a file share that meets your exact storage and performance requirements. Provisioned v2 shares enable independent provisioning of storage, IOPS, and throughput. In addition to predictable pricing and flexible provisioning, provisioned v2 also enables increased scale and performance, up to 256 TiB, 50,000 IOPS, and 5 GiB/sec of throughput, plus per-share monitoring. Provisioned v2 is generally available in a subset of regions. For the current list of available regions, see provisioned v2 availability.
Visibility (Billing conductor - Tags - cost categories)
New fields for cost allocation (Enterprise Agreement customers)
You can use different le
In episode 229 of our SAP on Azure video podcast we talk again about AI and SAP. The last few months have seen an amazing evolution in AI. What started as a simple Bot Framework evolved to GPT-driven chatbots, then to Copilot, and now to users interacting with agents. The latest step -- like what we are doing together with SAP on the Joule and Copilot integration -- is multi-agent integration. Chan Jin Park, CJ, was on our show twice to talk about his scenarios and demos leveraging Azure OpenAI, Teams and Copilot integrations. I am glad to have him back to show us his latest developments. Find all the links mentioned here: https://www.saponazurepodcast.de/episode229
Reach out to us for any feedback / questions:
* Robert Boban: https://www.linkedin.com/in/rboban/
* Goran Condric: https://www.linkedin.com/in/gorancondric/
* Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/
#Microsoft #SAP #Azure #SAPonAzure #AI #AzureAI
Design, customize and manage your own custom applications with Azure AI Foundry right from your code. With Azure AI Foundry, leverage over 1,800 models, seamlessly integrating them into your coding environment to create agents and tailored app experiences. Utilize Retrieval Augmented Generation and vector search to enrich responses with contextual information, as well as with built-in services to incorporate cognitive skills such as language, vision, and safety detection. Dan Taylor, Principal Product Architect for Azure AI Foundry SDK, also shares how to streamline your development process with tools for orchestration and monitoring. Use templates to simplify resource deployment and run evaluations against large datasets to optimize performance. With Application Insights, gain visibility into your app's metrics, enabling data-driven decisions for continuous improvement. ► QUICK LINKS: 00:00 - Manage AI apps in Azure AI Foundry 00:27 - What's in the Azure AI Foundry 01:37 - Ground apps using your own data 03:03 - Retrieval Augmented Generation 03:48 - Using vector search 04:17 - Set up your coding environment 06:11 - How to build AI apps in code 07:16 - Using traces 07:45 - Evaluating performance against large data sets 08:19 - Options for monitoring your AI apps 08:58 - Wrap up ► Link References Find the Azure AI Foundry at https://ai.azure.com Check out our code samples at https://aka.ms/AIAppTemplates ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. • Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
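The grounding pattern this episode walks through (retrieve relevant context, then pass it to a model deployment) can be sketched with the Azure OpenAI Python client. This is a minimal illustration rather than the Foundry SDK itself; the endpoint, API version, deployment name, and the hard-coded snippets standing in for retrieval results are all assumptions.

```python
# Hedged sketch of retrieval-augmented generation: inject retrieved snippets into
# the prompt before calling an Azure OpenAI chat deployment.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",  # assumed API version
)

retrieved_chunks = [
    "Contoso's return window is 30 days from delivery.",
    "Refunds are issued to the original payment method within 5 business days.",
]  # in a real app these come from your retrieval step (e.g. vector search)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # your deployment name, not the base model name
    messages=[
        {"role": "system",
         "content": "Answer using only the provided context.\n\nContext:\n"
                    + "\n".join(retrieved_chunks)},
        {"role": "user", "content": "How long do I have to return an item?"},
    ],
)
print(response.choices[0].message.content)
```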
A draft cybersecurity executive order from the Biden administration seeks to bolster defenses. Researchers identify a “mass exploitation campaign” targeting Fortinet firewalls. A Chinese-language illicit online marketplace is growing at an alarming rate. CISA urges patching of a second BeyondTrust vulnerability. The UK proposes banning ransomware payments by public sector and critical infrastructure organizations. A critical flaw in Google's authentication flow exposes millions to unauthorized access. OWASP releases its first Non-Human Identities (NHI) Top 10. A Microsoft lawsuit targets individuals accused of bypassing safety controls in its Azure OpenAI tools. Our guest is Chris Pierson, Founder and CEO of BlackCloak, discussing digital executive protection. The feds remind the health care sector that AI must first do no harm. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest Our guest is Chris Pierson, Founder and CEO of BlackCloak, discussing digital executive protection. Selected Reading Second Biden cyber executive order directs agency action on fed security, AI, space (CyberScoop) Snoops exploited Fortinet firewalls with 'probable' 0-day (The Register) The ‘Largest Illicit Online Marketplace' Ever Is Growing at an Alarming Rate, Report Says (WIRED) CISA Warns of Second BeyondTrust Vulnerability Exploited in Attacks (SecurityWeek) UK Considers Ban on Ransomware Payments by Public Bodies (Infosecurity Magazine) Google OAuth "Sign in with Google" Vulnerability Exposes Millions of Accounts to Data Theft (Cyber Security News) OWASP Publishes First-Ever Top 10 “Non-Human Identities (NHI) Security Risks (Cyber Security News) Microsoft Sues Harmful Fake AI Image Crime Ring (GovInfo Security) Feds Tell Health Sector to Watch for Bias in AI Decisions (BankInfo Security) Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
See the latest innovations in silicon design from AMD with new system-on-a-chip high bandwidth memory breakthroughs with up to 7 terabytes per second of memory bandwidth in a single virtual machine - and how it's possible to get more than 8x speed-ups without sacrificing compatibility when moving from the previous generation to HBv5. These use AMD EPYC™ 9004 Processors with AMD 3D V-Cache™ Technology. And find out how Microsoft's own silicon, including custom ARM-based Cobalt CPUs and Maia AI accelerators, delivers performance and power efficiency. Mark Russinovich, Azure CTO, Deputy CISO, Technical Fellow, and Microsoft Mechanics lead contributor, shows how with workloads spanning Databricks, Siemens, Snowflake, or Microsoft Teams, Azure provides the tools to improve efficiency and performance in your datacenter at hyperscale. ► QUICK LINKS: 00:00 - 7TB memory bandwidth in a single VM 00:51 - Efficiency and optimization 02:33 - Choose the right hardware for workloads 04:52 - Microsoft Cobalt CPUs and Maia AI accelerators 06:14 - Hardware innovation for diverse workloads 07:53 - Speedups with HBv5 VMs 09:04 - Compatibility moving from HBv4 to HBv5 11:29 - Future of HPC 12:01 - Wrap up ► Link References Check out https://aka.ms/AzureHPC For more about HBv5 go to https://aka.ms/AzureHBv5 ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. • Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Cale and Sujit discuss their current projects in Azure as 2024 comes to a close. They also cover a ton of AKS updates. Semantic Kernel makes it easier for developers to build Azure OpenAI applications that can also include SLMs like Phi-4. Azure has many options to use File Shares and Volumes, and we walk through the process of figuring out which one is right for your needs. Media file: https://azpodcast.blob.core.windows.net/episodes/Episode511.mp3 YouTube: https://youtu.be/dLfCJ6btKng Resources: Semantic Kernel - https://github.com/microsoft/semantic-kernel Journey with SK on OpenAI and AzureOpenAI Ollama (running SLM local) - https://github.com/ollama/ollama Ollamazure (running SLM that looks like Azure OpenAI) - https://github.com/sinedied/ollamazure PhiSilica - https://learn.microsoft.com/en-us/windows/ai/apis/phi-silica File Shares: https://learn.microsoft.com/en-us/azure/storage/common/storage-introduction Other updates: Lots of AKS updates! https://learn.microsoft.com/en-us/azure/aks/concepts-network-isolated https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/availability-performance/container-image-pull-performance https://learn.microsoft.com/en-us/azure/aks/imds-restriction https://learn.microsoft.com/en-us/azure/aks/use-windows-gpu https://azure.microsoft.com/en-us/updates/?id=471295 https://learn.microsoft.com/en-us/azure/backup/tutorial-restore-aks-backups-across-regions https://learn.microsoft.com/en-us/azure/aks/app-routing-nginx-configuration?tabs=azurecli#control-the-default-nginx-ingress-controller-configuration-preview https://learn.microsoft.com/en-us/azure/aks/automated-deployments https://learn.microsoft.com/en-us/azure/aks/aks-extension-ghcopilot-plugins https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-logs-schema#kubernetes-metadata-and-logs-filtering
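One practical point from this episode, running a small language model locally while keeping code compatible with Azure OpenAI, can be sketched with the standard OpenAI client pointed at Ollama's OpenAI-compatible endpoint. The local URL and model name are assumptions: they depend on your Ollama install and which model you have pulled (e.g. `ollama pull phi3`).

```python
# Hedged sketch: the same chat-completions call, served by a local SLM via Ollama
# instead of an Azure OpenAI deployment. Ollama ignores the API key value.
from openai import OpenAI

local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

reply = local.chat.completions.create(
    model="phi3",  # whichever small model you pulled locally
    messages=[{"role": "user",
               "content": "Summarize what Semantic Kernel is in one sentence."}],
)
print(reply.choices[0].message.content)
```

Swapping the client back to an Azure OpenAI endpoint (the scenario Ollamazure emulates) leaves the rest of the application code largely unchanged.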
In this episode of Exploring Artificial Intelligence in Oncology, Wanmei Ou, PhD, the Vice President of Product, Clinical Decision Support, & Analytics Data Platform at Ontada, shares more about Ontada's collaboration with Microsoft to transform unstructured oncology data with Azure OpenAI technology. Her discussion with Dr. Waqas Haque highlights how AI is transforming clinical workflows, reducing documentation burdens, and driving improved patient outcomes.
Omnicom acquires Interpublic and forms the world's largest advertising group
Omnicom has confirmed the acquisition of Interpublic for US$13.25 billion, creating the largest advertising holding company in the world. The merger unites giants of the sector in a move that redraws the advertising industry, combining creative and technological strengths with an expanded global portfolio. Together, the companies will serve major brands with greater operational efficiency and strategic synergy, responding to the growing demand for data and digital innovation. With this consolidation, Omnicom takes the market lead and challenges competitors to adapt to the new competitive landscape. Sources: AdExchanger, NeoFeed, Meio & Mensagem, UOL Economia and Valor Econômico
Global ad revenue surpasses US$1 trillion in 2024
The historic US$1 trillion mark in 2024 was reached on the back of advances in digital advertising, especially on mobile devices and streaming platforms. Key highlights:
> More than half of the investment will go to the five largest digital advertising companies – Google, Meta, ByteDance, Amazon and Alibaba.
> Digital advertising will account for 81.7% of revenue in 2025, including revenue from streaming TV, digital out-of-home media and mobile advertising.
> Total TV revenue is expected to grow only 1.9% in 2025, reaching US$169.1 billion, with streaming TV growing 12.9% while linear TV declines 3.4%.
> Even so, linear TV will represent 72.6% of total TV revenue in 2025 (about US$122 billion), while streaming will represent only 37.5% by 2029.
> After growing 18.2% this year, global retail media ad revenue is expected to grow another 13.8%, reaching US$176.9 billion in 2025 and surpassing TV revenue for the first time.
Apple appeals Brazilian ruling on App Store fees
Apple has filed an appeal against a Brazilian court decision challenging the fees charged by the App Store, currently 30% for developers on transactions made within the platform. The ruling follows growing global scrutiny of practices considered anticompetitive by large technology companies. In Brazil, regulators and developers argue that the fees create barriers for small businesses and startups, limiting competitiveness. Apple defends the policy as essential to maintaining the security and quality of the platform. The case reinforces the international push for greater regulation of tech giants, with potential implications for the company's business model in emerging markets.
Visa launches financial management solution for SMBs in Brazil
In partnership with the startup Celero, Visa introduced a service aimed at small and medium-sized businesses in Brazil, offering tools to digitize payments and optimize cash flow management. The solution seeks to meet the growing demand for simplified financial management, especially as digitization increases among SMBs.
Rappi launches new app focused on AI, service personalization and social features
The new app will include features focused on integrating artificial intelligence, improved personalized recommendations, greater engagement and more efficient deliveries. Generative AI integration will optimize search, recommendation and chat experiences, with responses targeted and personalized to each person and each context.
The platform will also bring social features, allowing users to recommend restaurants and follow friends and favorite food influencers. In addition, a strategic partnership with more than 200 influencers in the region will bring exclusive recommendations, promoting a more interactive and community-driven experience.
Reddit announces AI that answers based on user discussions
Reddit unveiled a new artificial intelligence tool capable of generating contextualized answers from discussions and comments made by its own users. The technology aims to boost engagement in communities by surfacing relevant information directly in conversations, but it raises questions about moderation and privacy. This approach to AI could be adopted by e-commerce sites and marketplaces to turn their own reviews into answers to user questions.
OpenAI launches Sora, its AI video generation tool
OpenAI has officially launched Sora, its tool for generating videos from text prompts. Early reviews highlight the impressive visual quality and ease of use, allowing content creators to produce short videos efficiently. However, the platform still has limitations, such as limited customization for complex narratives. Sora positions OpenAI as a leader in the creative use of AI. However…
Sora, OpenAI's video generator, faces restrictions in the European Union
Sora, OpenAI's new AI video generation tool, may not launch in the European Union due to regulatory restrictions related to the Digital Services Act (DSA). Sora can create full videos from text commands, but privacy and transparency challenges in the region could delay its arrival. The situation highlights the difficulties AI companies face in dealing with strict data protection rules in the EU, even for innovative tools.
Meta unveils Llama 3, a more efficient and advanced AI model
Meta announced Llama 3, its new generation of AI models, promising greater energy efficiency and better performance in practical applications. The release is part of the company's effort to lead generative AI development in the global market.
Blip and Microsoft expand integration in conversational AI solutions
Blip, the Brazilian chatbot communication platform, announced a strategic partnership with Microsoft to expand the integration of its solutions with Azure OpenAI. The goal is to offer smarter, more personalized experiences in virtual customer service, strengthening the conversational AI ecosystem in the Brazilian market.
See omnystudio.com/listener for privacy information.
Millions of people use Azure AI Search every day without knowing it. You can enable your apps with the same search that enables retrieval-augmented generation (RAG) capabilities when you build Custom GPTs or attach files in your ChatGPT prompts. Pablo Castro, Microsoft CVP and Distinguished Engineer, Azure AI Search, joins Jeremy Chapman to share how with Azure AI Search, you can create custom applications that retrieve the most relevant information quickly and accurately, even from billions of records. Manage massive-scale datasets while maintaining high-quality search results with ultra-compact, binary quantized vector search indexes that use Matryoshka Representation Learning (MRL) and oversampling to equal the search accuracy of vector indexes up to 96 times larger. These approaches drive significant cost savings by optimizing your vector indexes without compromising quality. ► QUICK LINKS: 00:00 - RAG powered by Azure AI Search 00:50 - Azure AI Search role in ChatGPT 02:01 - Azure AI Search use case - AT&T 03:27 - Start in Azure Portal 04:35 - Massive scale and vector index 06:08 - Scalar & Binary Quantization 07:21 - Matryoshka technique 09:07 - Oversampling 11:31 - How to build an app using Azure AI Search 13:00 - See it in action 14:28 - Enable binary quantization with oversampling 14:54 - Wrap up ► Link References Get sample code on GitHub at https://aka.ms/SearchQuantizationSample Check out search solutions at https://aka.ms/AzureAISearch ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. • Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
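For context, a vector query against an index like the ones described here looks the same whether or not the index uses scalar/binary quantization or oversampling; those are properties of the index definition, not of the query. The sketch below uses the azure-search-documents and openai Python packages, and the index name, vector field name, and embedding deployment are placeholder assumptions.

```python
# Hedged sketch: embed a question, then run a vector query against Azure AI Search.
import os
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery
from openai import AzureOpenAI

aoai = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",  # assumed API version
)
question = "What does the onboarding policy say about laptops?"
query_vector = aoai.embeddings.create(
    model="text-embedding-3-large",  # your embedding deployment name
    input=question,
).data[0].embedding

search = SearchClient(
    endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
    index_name="docs-index",  # placeholder index name
    credential=AzureKeyCredential(os.environ["AZURE_SEARCH_KEY"]),
)
results = search.search(
    search_text=None,  # pure vector query; pass text here for hybrid search
    vector_queries=[VectorizedQuery(vector=query_vector,
                                    k_nearest_neighbors=5,
                                    fields="contentVector")],  # placeholder field
    select=["title", "chunk"],
)
for doc in results:
    print(doc["title"])
```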
The Emergence of Innovative Partnerships: As AI becomes increasingly integral across industries, healthcare is at the forefront of adopting these technologies to improve patient outcomes and streamline services. Sean Martin emphasizes the collaboration between StackAware and Embold Health, setting the stage for a discussion on how they leverage HITRUST to enhance healthcare solutions.
A Look into StackAware and Embold Health: Walter Haydock, founder and CEO of StackAware, shares the company's mission to support AI-driven enterprises in measuring and managing cybersecurity compliance and privacy risks. Meanwhile, Steve Dufour, Chief Security and Privacy Officer of Embold Health, describes their initiative to assess physician performance, guiding patients toward top-performing providers.
Integrating AI Responsibly: A key theme throughout the conversation is the responsible integration of generative AI into healthcare. Steve Dufour details how Embold Health developed a virtual assistant using Azure OpenAI, ensuring users receive informed healthcare recommendations without long-term storage of sensitive data.
Assessment Through Rigorous Standards: Haydock and Dufour also highlight the importance of ensuring data privacy and compliance with security standards, from conducting penetration tests to implementing HITRUST assessments. Their approach underscores the need to prioritize security throughout product development, rather than as an afterthought.
Navigating Risk and Compliance: The conversation touches on risk management and compliance, with both speakers emphasizing the importance of aligning AI initiatives with business objectives and risk tolerance. A strong risk assessment framework is essential for maintaining trust and security in AI-enabled applications.
Conclusion: This in-depth discussion not only outlines a responsible approach to incorporating AI into healthcare but also showcases the power of collaboration in driving innovation. Sean Martin concludes with a call to embrace secure, impactful technologies that enhance healthcare services and improve outcomes.
Learn more about HITRUST: https://itspm.ag/itsphitweb
Note: This story contains promotional content. Learn more.
Guests:
Walter Haydock, Founder and CEO, StackAware
On LinkedIn | https://www.linkedin.com/in/walter-haydock/
Steve Dufour, Chief Security & Privacy Officer, Embold Health
On LinkedIn | https://www.linkedin.com/in/swdufour/
Resources
Learn more and catch more stories from HITRUST: https://www.itspmagazine.com/directory/hitrust
View all of our HITRUST Collaborate 2024 coverage: https://www.itspmagazine.com/hitrust-collaborate-2024-information-risk-management-and-compliance-event-coverage-frisco-texas
Are you interested in telling your story?
https://www.itspmagazine.com/telling-your-story
FULL SHOW NOTES https://podcast.nz365guy.com/610
Francis Msangi Masera, a passionate Chief Innovation Officer from Nairobi, Kenya, takes us on a fascinating journey through his career in the tech world. From his initial spark of interest in computers during high school to becoming a Microsoft MVP in 2024, Francis' story is one of dedication and innovation. We explore his transition from being a C-sharp developer to specializing in Navision and Business Central, emphasizing his impactful role in implementing these solutions locally and internationally. With the rising popularity of Business Central and Power Platform in Kenya, especially among SMEs and large organizations, Francis provides valuable insights and shares a glimpse of Nairobi's vibrant culture, including the culinary delight of Nyama Choma.
Our conversation shifts to the exciting integration of AI tools like Microsoft's Copilot in Business Central, and how the Azure OpenAI service is enhancing functionalities for businesses. Francis sheds light on the importance of prompt engineering in optimizing these technologies, offering practical examples like invoicing with specific item queries. He also shares strategic tips on becoming a Microsoft MVP, emphasizing the significance of community contributions and effective showcasing of work. His experiences, including speaking at events like Experts Live Kenya, highlight his commitment to community engagement and knowledge sharing. Join us to discover how Francis's expertise is helping businesses unleash the full potential of ERP platforms through AI advancements.
OTHER RESOURCES:
Microsoft MVP YouTube Series - How to Become a Microsoft MVP
90 Day Mentoring Challenge - https://ako.nz365guy.com/
Support the show
If you want to get in touch with me, you can message me here on LinkedIn.
Thanks for listening
With more than 1700 models to choose from on Azure, selecting the right one is key to enabling the right capabilities, at the right price point, and with the right protections in place. That's where the Azure AI model catalog and model benchmarks can help. With Azure AI, you can seamlessly integrate powerful GenAI models into your app development process, making your applications smarter, more efficient, and highly scalable. Access a vast selection of AI models, from sophisticated large language models to efficient small models that can run offline. Matt McSpirit, Microsoft Azure expert, shows how to compare and select the right AI model for your specific needs. Azure AI's model benchmarks evaluate models on accuracy, coherence, groundedness, fluency, relevance, and similarity. Experiment with different models in Azure AI Studio or your preferred coding environment, and optimize costs with serverless pricing options. ► QUICK LINKS: 00:00 - Build GenAI powered apps 00:53 - Model choice 02:11 - Use your environments of choice 02:44 - Choose the right AI model 05:28 - Compare models 08:04 - Wrap up ► Link References Get started at https://ai.azure.com See data, privacy, and security for use of models at https://aka.ms/AzureAImodelcontrols ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. • Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics • Enjoy us on Instagram: https://www.instagram.com/msftmechanics • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Build your own custom applications with Azure AI right from your code. With Azure AI, leverage over 1,700 models, seamlessly integrating them into your coding environment to create tailored app experiences. Utilize features like Retrieval Augmented Generation and vector search to enrich responses with contextual information, as well as prebuilt Azure AI services to incorporate cognitive skills such as language, vision, and safety detection. Dan Taylor, Principal Product Architect for Microsoft Azure AI, also shares how to streamline your development process with tools for orchestration and monitoring. Use templates to simplify resource deployment and run evaluations against large datasets to optimize performance. With Application Insights, gain visibility into your app's metrics, enabling data-driven decisions for continuous improvement. ► QUICK LINKS: 00:00 - Build custom AI apps with the studio in Azure AI 00:27 - Leverage the studio in Azure AI 01:37 - Build apps grounded on custom data 03:03 - Retrieval Augmented Generation 03:48 - Vector search 04:17 - Set up your coding environment 06:11 - How to build in code 07:16 - Traces 07:45 - Evaluate performance against large data set 08:19 - Options for monitoring 08:58 - Wrap up ► Link References To get started, go to https://ai.azure.com Check out our code samples at https://aka.ms/AIAppTemplates ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. • Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Welcome to the Tearsheet Podcast, where we explore financial services together with an eye on technology, innovation, emerging models, and changing expectations. I'm Tearsheet's editor in chief, Zack Miller. Today, we're joined by Bill Borden, Corporate Vice President, Worldwide Financial Services, at Microsoft, and Suzanne Dann, CEO for the Americas at Wipro. Together, they discuss their collaboration on leveraging Azure OpenAI to enhance generative AI in finance. This partnership focuses on improving customer experiences, streamlining processes, and ensuring responsible AI practices in the financial industry. As Suzanne puts it, “My role is to help clients digitally transform by bringing together the right industry expertise, technology, and integration experience.” Bill adds, “Our goal at Microsoft is to build products and services that truly meet the unique needs of financial institutions.” We'll explore how cognitive assistants, powered by generative AI, are reshaping customer interactions, loan origination, and even the broker experience, all while maintaining a focus on security, reliability, and expanding AI access across the sector.
Microsoft Fabric seamlessly integrates with generative AI to enhance data-driven decision-making across your organization. It unifies data management and analysis, allowing for real-time insights and actions. With Real Time Intelligence, keeping grounding data for large language models (LLMs) up-to-date is simplified. This ensures that generative AI responses are based on the most current information, enhancing the relevance and accuracy of outputs. Microsoft Fabric also infuses generative AI experiences throughout its platform, with tools like Copilot in Fabric and Azure AI Studio enabling easy connection of unified data to sophisticated AI models. ► QUICK LINKS: 00:00 - Unify data with Microsoft Fabric 00:35 - Unified data storage & real-time analysis 01:08 - Security with Microsoft Purview 01:25 - Real-Time Intelligence 02:05 - Integration with Azure AI Studio ► Link References This is Part 3 of 3 in our series on leveraging generative AI. Watch our playlist at https://aka.ms/GenAIwithAzureDBs ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. • Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
How do you manage APIs to GenAI, and how can GenAI help with API management? Carl and Richard chat with Andrei Kamenev about the latest features coming to Azure API Management. On the one hand, there are Copilot tools to help craft and understand APIM policies, which can get very complex. Then, there is the provisioning of access to GenAI-related APIs like the Azure OpenAI service, which utilize tokens - and those tokens mean money, so they need to be controlled. The GenAI Gateway provides the ability to rate-limit token issuing and all the other capabilities you expect from APIM. Prompt caching is in preview and can decrease the cost of repeated use of the same prompts. Many of the features are new, and more are coming!
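As a rough illustration of what consuming a GenAI gateway looks like from application code, the sketch below points the standard Azure OpenAI Python client at an API Management endpoint and backs off when the token-limit policy rejects a request. The gateway URL, the assumption that the gateway accepts its subscription key in the standard `api-key` header (a common setup, but configuration-dependent), and the `gpt-4o` deployment name are all placeholders, not details from the episode.

```python
# Sketch: call Azure OpenAI *through* an API Management GenAI gateway and back off
# when the gateway's token-per-minute policy rejects a request with HTTP 429.
# Assumes the gateway takes its subscription key in the standard "api-key" header;
# adjust to match your own APIM policies.
import os
import time

import openai
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["APIM_GATEWAY_URL"],    # e.g. https://<apim-name>.azure-api.net/<api-suffix>
    api_key=os.environ["APIM_SUBSCRIPTION_KEY"],      # APIM subscription key, not the OpenAI resource key
    api_version="2024-06-01",
)

def chat(prompt: str, retries: int = 3) -> str:
    """Send one chat turn via the gateway, retrying with exponential backoff on 429s."""
    for attempt in range(retries):
        try:
            r = client.chat.completions.create(
                model="gpt-4o",  # deployment name exposed by the gateway
                messages=[{"role": "user", "content": prompt}],
            )
            return r.choices[0].message.content
        except openai.RateLimitError:
            time.sleep(2 ** attempt)  # token budget exhausted; wait and retry
    raise RuntimeError("Token budget still exhausted after retries")

print(chat("In one sentence, why do token limits matter for cost control?"))
```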
See how you can use your SQL data as a robust backend for AI applications with semantic search and retrieval augmented generation. Azure SQL Database, along with any SQL Server instances you connect to Azure, integrates seamlessly with Azure AI Search and the Azure OpenAI Service, enabling natural language search capabilities within your SQL data. Use this approach to enhance e-commerce platforms, financial data analysis, or CRM systems while using SQL's granular access management controls to ensure that data searches and generated responses are limited to specific user permissions. ► QUICK LINKS: 00:00 - Generative AI for your SQL workloads 00:33 - Retrieval Augmented Generation 01:00 - Granular access management 01:15 - Enable natural language search 01:54 - Connect vectorized SQL data 02:20 - Test generative AI responses 02:41 - Wrap up ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. • Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
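The video is a demo rather than code, but the pattern it shows (SQL rows vectorized into Azure AI Search, retrieved by similarity, and handed to Azure OpenAI as grounding) roughly looks like the sketch below. The index name, the `content`/`content_vector` field names, and both deployment names are assumptions made for illustration; the permission trimming mentioned above would be applied when the index is built or via search filters.

```python
# Sketch of the RAG pattern described above: embed the question, retrieve matching
# rows previously indexed from SQL into Azure AI Search, then ground the answer.
# Index, field, and deployment names are illustrative assumptions.
import os

from openai import AzureOpenAI
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

aoai = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)
search = SearchClient(
    endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
    index_name="products-from-sql",  # hypothetical index built from SQL rows
    credential=AzureKeyCredential(os.environ["AZURE_SEARCH_KEY"]),
)

question = "Which jackets under $100 are waterproof?"

# 1) Embed the natural-language question.
emb = aoai.embeddings.create(model="text-embedding-3-small", input=question).data[0].embedding

# 2) Vector search the indexed SQL content (row-level permissions would be applied
#    when building or filtering the index, per the access controls mentioned above).
results = search.search(
    search_text=question,
    vector_queries=[VectorizedQuery(vector=emb, k_nearest_neighbors=5, fields="content_vector")],
    select=["content"],
)
context = "\n".join(doc["content"] for doc in results)

# 3) Ask the chat model to answer only from the retrieved rows.
answer = aoai.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer using only the provided product data."},
        {"role": "user", "content": f"Data:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```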
Are you ready to unlock the full potential of AI in accounting and finance? How can Azure OpenAI revolutionize the way financial professionals work? In this special bonus episode of the Accounting Influencers podcast, Rob Brown explores the world of artificial intelligence with Microsoft experts Jas Brar and Matt Quinn. The conversation examines how AI, particularly Azure OpenAI, is transforming the accounting and finance sector, offering new opportunities for efficiency, insight, and value creation. Jas and Matt discuss the evolution of AI from traditional rule-based systems to more sophisticated generative AI, emphasizing how these technologies can augment human capabilities rather than replace them. They address common concerns about data security and privacy, explaining how Azure OpenAI provides a secure environment for businesses to leverage AI while maintaining control over their sensitive information. The experts also offer practical advice on how accounting professionals can start incorporating AI into their work, from using productivity tools like Copilot to partnering with IT teams for more complex implementations. Key Takeaways: ✓ AI is evolving from rule-based systems to more sophisticated, context-aware models ✓ Azure OpenAI offers secure, private AI capabilities for businesses ✓ AI can help accountants shift from data crunching to more strategic, value-added activities ✓ Starting with productivity tools like Copilot can help professionals become comfortable with AI ✓ Successful AI implementation requires considering technology, processes, and people. Quotes: "I'd rather flip that on its head and that way, you know, I'm more confident in the decision we would want to make based on spending more time getting to the roots of different scenarios on the table." - Jas Brar "Always start with why, right? How do you think about what, why would I use this technology? It's not technology for technology's sake." - Matt Quinn ◣━━━━━━━━━━━━━━━━━━━━◢ The Accounting Influencers Podcast, hosted by Rob Brown, is one of the world's leading shows for accounting leaders, professionals, finance specialists, software vendors, tech providers and influencers. Thanks to our sponsors: ADVANCETRACK OUTSOURCING. Transform your accounting firm with AdvanceTrack. Our top-tier offshoring solutions free your team from mundane tasks, allowing you to focus on growth and client engagement. Experience seamless scalability and expert support. Visit advancetrack.com and elevate your practice today. https://www.advancetrack.com ACCOUNTEX. Bringing the accounting world together with UK and international events for the accounting and tech world. https://www.accountex.co.uk If you'd like to sponsor the show and elevate your brand with our audience, reach out to show host Rob Brown on LinkedIn and his team will fix up a chat to explore. https://www.linkedin.com/in/therobbrown You can also check out all shows on the Accounting Influencers YouTube Channel: https://bit.ly/AI-youtube
Avalonia XPF This episode of The Modern .NET Show is supported, in part, by RJJ Software's Podcasting Services, where your podcast becomes extraordinary. Show Notes Maybe start with Generative AI. As you, I think, touched on, it's different from what we call "traditional AI." And I also want to acknowledge the term "traditional AI" is very odd to say; it's not traditional. It's very much prevalent and relevant and active — Amit Bahree Welcome to The Modern .NET Show! Formerly known as The .NET Core Podcast, we are the go-to podcast for all .NET developers worldwide and I am your host Jamie "GaProgMan" Taylor. In this episode, Amit Bahree joined us to talk about what generative AI is, what it isn't, and how it's different from so-called "traditional AI". He also talks through his new book "Generative AI in Action", a book that I had the good fortune to read ahead of publication and can definitely recommend. I'm not asking is it going to replace an engineer, but like, can an engineer for now just ignore it a little bit? —Jamie Taylor Yeah, no. So, no, it's not replacing any engineers, I can tell you that. No. — Amit Bahree So let's sit back, open up a terminal, type in dotnet new podcast and we'll dive into the core of Modern .NET. Supporting the Show If you find this episode useful in any way, please consider supporting the show by either leaving a review (check our review page for ways to do that), sharing the episode with a friend or colleague, buying the host a coffee, or considering becoming a Patron of the show. Full Show Notes The full show notes, including links to some of the things we discussed and a full transcription of this episode, can be found at: https://dotnetcore.show/season-6/generative-ai-for-dotnet-developers-with-amit-bahree/ Useful Links A discount code, good for 45% off all Manning Products: dotnetshow24 Generative AI in Action by Amit Bahree Phi-3 Attention Is All You Need Coding Blocks podcast Connecting with Amit: on X (formerly known as Twitter) @bahree Amit's blog Supporting the show: Leave a rating or review Buy the show a coffee Become a patron Getting in touch: via the contact page joining the Discord Music created by Mono Memory Music, licensed to RJJ Software for use in The Modern .NET Show Remember to rate and review the show on Apple Podcasts, Podchaser, or wherever you find your podcasts, this will help the show's audience grow. Or you can just share the show with a friend. And don't forget to reach out via our Contact page. We're very interested in your opinion of the show, so please get in touch. You can support the show by making a monthly donation on the show's Patreon page at: https://www.patreon.com/TheDotNetCorePodcast.
What if you want to build your own copilot? Carl and Richard talk to Vishwas Lele about his new startup, which is focused on using Azure OpenAI tools to help automate the government RFP writing process. Vishwas discusses the complexities of proposal writing, how specific and complex rules exist for each part of the proposal, and the challenge of getting the software to do an excellent job on the draft. The conversation digs into the domain expertise needed for the technologies and the proposal writing itself - like all good software, it requires domain experts. But when done right, this is hugely valuable software!
Leverage Azure Cosmos DB for generative AI workloads for automatic scalability, low latency, and global distribution to handle massive data volumes and real-time processing. With support for versatile data models and built-in vector indexing, it efficiently retrieves natural language queries, making it ideal for grounding large language models. Seamlessly integrate with Azure OpenAI Studio for API-level access to GPT models and access a comprehensive gallery of open-source tools and frameworks in Azure AI Studio to enhance your AI applications. ► QUICK LINKS: 00:00 - Azure Cosmos DB for generative AI workloads 00:18 - Versatile Data Models 00:39 - Scalability and performance 01:19 - Global distribution 01:31 - Vector indexing and search 02:07 - Grounding LLMs 02:30 - Wrap up ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. • Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
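To make the vector indexing and retrieval step concrete, here is a hedged sketch against an Azure Cosmos DB for NoSQL container whose documents carry an `embedding` field, using the `VectorDistance` system function for similarity ranking. The database, container, and field names are invented for illustration, and the query shape assumes vector search is enabled and a vector index and embedding policy are configured on the container; check the current Cosmos DB documentation for your account before relying on it.

```python
# Sketch: retrieve the documents closest to a query embedding from Azure Cosmos DB
# (NoSQL API) so they can be handed to a chat model as grounding context.
# Names are illustrative; vector search must be enabled and indexed on the container.
import os

from azure.cosmos import CosmosClient  # pip install azure-cosmos
from openai import AzureOpenAI

aoai = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)
cosmos = CosmosClient(os.environ["COSMOS_ENDPOINT"], credential=os.environ["COSMOS_KEY"])
container = cosmos.get_database_client("retail").get_container_client("products")

query = "lightweight hiking backpack"
query_vec = aoai.embeddings.create(model="text-embedding-3-small", input=query).data[0].embedding

# VectorDistance ranks documents by similarity to the query embedding.
items = container.query_items(
    query=(
        "SELECT TOP 5 c.id, c.description, "
        "VectorDistance(c.embedding, @vec) AS score "
        "FROM c ORDER BY VectorDistance(c.embedding, @vec)"
    ),
    parameters=[{"name": "@vec", "value": query_vec}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item["id"], round(item["score"], 4), item["description"][:60])
```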
Get a unified solution for secure access management, identity verification, and Zero Trust security for cloud and on-premises resources. The new Microsoft Entra suite integrates five capabilities: Private Access, Internet Access, ID Protection, ID Governance, and Face Check as part of Verified ID Premium, included with Microsoft Entra Suite. With these capabilities, you can streamline user onboarding, enhance security with automated workflows, and protect against threats using Conditional Access policies. See how to reduce security gaps, block lateral attacks, and replace legacy VPNs, ensuring efficient and secure access to necessary resources. Jarred Boone, Identity Security Senior Product Manager, shares how to experience advanced security and management with Microsoft Entra Suite. ► QUICK LINKS: 00:00 - Unified solution with Microsoft Entra Suite 00:38 - Microsoft Entra Private Access 01:39 - Microsoft Entra Internet Access 02:42 - Microsoft Entra ID Protection 03:31 - Microsoft Entra ID Governance 04:18 - Face Check in Verified ID Premium, included with Microsoft Entra Suite 04:52 - How core capabilities work with onboarding process 06:08 - Protect access to resources 07:22 - Control access to internet endpoints 08:05 - Establish policies to dynamically adjust 08:45 - Wrap up ► Link References Try it out at https://entra.microsoft.com Watch our related deep dives at https://aka.ms/EntraSuitePlaylist Check out https://aka.ms/EntraSuiteDocs ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. • Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Kuai Keji (Fast Technology) reported on June 28 that automaker Audi has announced a partnership with the Microsoft Azure OpenAI Service, planning to bring ChatGPT technology to roughly 2 million vehicles starting this July to enhance their voice-control capabilities. Audi models built since 2021 with the third-generation modular infotainment system (MIB 3) will gain more natural voice interaction through ChatGPT, letting owners query information in natural language while driving. Future models built on the E3 1.2 electronic architecture will connect to ChatGPT via Cerence Chat Pro, extending the capabilities of the Audi assistant. Owners will be able to control infotainment, navigation, and climate systems by voice, and ask general-knowledge questions as well. Simply saying "Hey Audi" or pressing the push-to-talk button on the steering wheel invokes ChatGPT. Audi emphasizes that all questions and answers are deleted after processing to protect user privacy. In the future, Audi users will be able to ask more complex car-related questions, such as the correct tire pressure for their vehicle, making the driving experience smarter and more convenient. Want to learn more? 1. Visit iSuperman Marketing to claim ten free online marketing lessons: https://www.isuperman.tw 2. Add our official LINE account as a friend: https://line.me/R/ti/p/%40gyx7886l --- Send in a voice message: https://podcasters.spotify.com/pod/show/isuperman/message
We talked about finally being able to add the Azure OpenAI "donut book" and the Hitome de Wakaru (understand-at-a-glance) series to our reading piles, how it's hard to put in a request without a specific occasion to justify it, and more.
Host James Caton and Deepak Khosla of LTIMindtree discuss how the UN High Commission for Refugees uses Azure OpenAI to accelerate on-the-ground crisis response around the world, bringing hope and relief to the most vulnerable in their greatest time of need. This episode covers: How implementing Azure AI in humanitarian organizations can have a significant impact on response times and efficiency. High-quality data is crucial for the success of AI projects. An agile approach allows for experimentation and scalability. Effective search capabilities and prompt engineering are key to obtaining accurate and relevant information. Change management and continuous monitoring are essential for successful implementation and user adoption. Learn More: About Microsoft AI About LTIMindtree More on these leaders: James Caton Deepak Khosla #GenerativeAI #Microsoft #Azure #AzureOpenAI #Chatbot
GPT-4 was recently launched and is now also available on Azure. In addition, a Microsoft blog post last week sparked quite a bit of discussion.
Slack - AI opt-out: Information on how to opt out of AI features in Slack. Source: https://slack.com/trust/data-management/privacy-principles#:~:text=To%20opt%20out%2C%20please%20have,opt%20out%20has%20been%20completed
GPT-4o in Azure OpenAI: Discover the new GPT-4o models available in Azure OpenAI. Source: https://azure.microsoft.com/en-us/updates/new-openai-model-on-azure/
Microsoft will require MFA for all Azure users: Microsoft announces that multi-factor authentication (MFA) will soon be mandatory for all Azure users. Source: https://techcommunity.microsoft.com/t5/core-infrastructure-and-security/microsoft-will-require-mfa-for-all-azure-users/ba-p/4140391
Azure API Center: Learn more about the general availability of Azure API Center. Source: https://azure.microsoft.com/en-us/updates/general-availability-azure-api-center/
System Center 2025: Announcement of the new System Center 2025 release. Source: https://techcommunity.microsoft.com/t5/system-center-blog/announcement-system-center-2025-is-here/ba-p/4138510
Azure Bastion Developer SKU: General availability of the new Developer SKU for Azure Bastion. Source: https://azure.microsoft.com/en-us/updates/general-availability-azure-bastion-developer-sku/
Ubuntu 24.04 LTS for Azure Virtual Machines: Ubuntu 24.04 LTS is now generally available for Azure Virtual Machines. Source: https://azure.microsoft.com/en-us/updates/generally-available-ubuntu-2404-lts-for-azure-virtual-machines/
M365 Message Center items: Important updates from the Microsoft 365 Message Center. Sources: https://app.cloudscout.one/evergreen-item/mc793656/, https://app.cloudscout.one/evergreen-item/mc792991/, https://app.cloudscout.one/evergreen-item/mc790797/
In this week's episode, we ponder what to expect from Microsoft Build 2024. Is it worth traveling to Seattle for the event, which takes place May 21-23? We reflect on the possible announcements, the breakout session topics, as well as the keynotes. Also, Tobi asks Jussi an unexpected question.
(00:00) - Intro and catching up.
(05:14) - Show content starts.
Show links:
- Microsoft Build 2024
- Cloud Skills Challenge
- Employ AI on Snapdragon X Elite for code generation and image creation (Seattle only)
- RAG at scale: production-ready GenAI apps with Azure AI Search
- (PnP team – Rob Bagby delivering a session: Take an Azure OpenAI chat application from PoC to enterprise-ready)
- Optimize productivity by meeting next-level demands of the AI era
- The power of AI and Copilot for Azure Databases
- Power AI apps and develop rich experiences with Azure SQL Database
- What's new in GitHub Copilot and the Visual Studio family
- Activate enterprise data in AI-enabled business applications
- Inside AI Security with Mark Russinovich
- Learn how to accelerate Stable Diffusion
- Deploy next generation workflows from PC to Cloud with Arm tools
- Scott and Mark learn to Copilot
- Running .NET on the NES
- Give us feedback!
Build low-latency recommendation engines with Azure Cosmos DB and Azure OpenAI Service. Elevate user experience with vector-based semantic search, going beyond traditional keyword limitations to deliver personalized recommendations in real-time. With pre-trained models stored in Azure Cosmos DB, tailor product predictions based on user interactions and preferences. Explore the power of augmented vector search for optimized results prioritized by relevance. Kirill Gavrylyuk, Azure Cosmos DB General Manager, shows how to build recommendation systems with limitless scalability, leveraging pre-computed vectors and collaborative filtering for next-level, real-time insights. ► QUICK LINKS: 00:00 - Build a low latency recommendation engine 00:59 - Keyword search 01:46 - Vector-based semantic search 02:39 - Vector search built-in to Cosmos DB 03:56 - Model training 05:18 - Code for product predictions 06:02 - Test code for product prediction 06:39 - Augmented vector search 08:23 - Test code for augmented vector search 09:16 - Wrap up ► Link References Walk through an example at https://aka.ms/CosmosDBvectorSample Try out Cosmos DB for MongoDB for free at https://aka.ms/TryC4M ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. • Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
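To separate the idea from any particular database API, here is a framework-free sketch of the core ranking step the video describes: item embeddings are pre-computed offline, a user profile vector is derived from past interactions, and recommendations are the nearest items by cosine similarity. In the demo this nearest-neighbor step runs inside Azure Cosmos DB's built-in vector search rather than in application code; the product names and vector sizes below are toy values.

```python
# Concept sketch of vector-based recommendation over pre-computed embeddings.
# In the demo above, the nearest-neighbor ranking happens inside Cosmos DB's
# vector search; here it is done in-process with NumPy purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these were produced offline by an embedding model (one row per product).
product_ids = ["bike-101", "helmet-7", "lock-3", "light-12", "pump-5"]
product_vecs = rng.normal(size=(len(product_ids), 8))

# A user profile vector, e.g. the mean of embeddings of products the user interacted with.
user_vec = product_vecs[[0, 3]].mean(axis=0)

def cosine_scores(items: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Cosine similarity of each row of `items` against the `query` vector."""
    return (items @ query) / (np.linalg.norm(items, axis=1) * np.linalg.norm(query))

scores = cosine_scores(product_vecs, user_vec)
for i in np.argsort(-scores)[:3]:  # top-3 recommendations
    print(f"{product_ids[i]:10s} similarity={scores[i]:.3f}")
```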
Microsoft announced their FY24 Q3 earnings yesterday. Total revenue was reported as $61.9B, better than the $60.9B analysts were expecting. The critically important Microsoft Cloud revenue, which includes Azure, O365 Commercial, LinkedIn, Dynamics 365 and other cloud products, came in at $35.1B, representing a 23% increase year-over-year. Microsoft's Azure revenue grew 31% in the quarter, which is an acceleration from the prior quarter with 7 points coming from Microsoft's AI services. Microsoft's go-forward success and revenue growth will continue to be directly tied to Microsoft's ability to get more customers to adopt cloud products, AI solutions (Copilot for Microsoft 365, Copilot for Sales, Copilot for Service, GitHub Copilot, and Azure OpenAI) and migrate to the costly all-in Microsoft 365 E5 suite. In this podcast, our Microsoft Practice Leader, Adam Mansfield, discusses how customers can take advantage of Microsoft's needs and focus areas to ensure the right deal is struck at the negotiation table. He also covers what enterprise customers should expect from Microsoft as they prepare for their in-term (“early renewal”) or renewal negotiations. Host: Adam Mansfield: https://bit.ly/3rPGp8r Microsoft Commercial Advisory Services: https://bit.ly/2V78ADX
Introducing The Power of Collaboration: How IBM Teams Up with Microsoft, from Smart Talks with IBM. Follow the show: Smart Talks with IBM. Entire industries are being reshaped around the new capabilities of generative AI. In this special bonus episode of Smart Talks with IBM, Tim Harford leads a conversation between two leaders in the field. Srinivasan Venkatarajan is the Director of Global Partner Business at Microsoft, focusing on Azure Data & AI and Azure OpenAI. And Chris McGuire is the General Manager of the Global Microsoft Partnership for IBM. They discuss the efforts by IBM and Microsoft in the generative AI space, how this relationship has been providing value to clients, and why collaboration is necessary for technological progress. Visit us at: ibm.com/smarttalks Learn more about IBM Consulting for Microsoft This is a paid advertisement from IBM. See omnystudio.com/listener for privacy information. DISCLAIMER: Please note, this is an independent podcast episode not affiliated with, endorsed by, or produced in conjunction with the host podcast feed or any of its media entities. The views and opinions expressed in this episode are solely those of the creators and guests. For any concerns, please reach out to team@podroll.fm.
In this Microsoft Cloud Executive Enablement Series episode, host Amy Boyle, Director of Global Partner Enablement, GSI at Microsoft, is joined by Dave Sloan, CTO of Global Market Development, Worldwide Public Sector at Microsoft. Dave highlights the dual impact of AI on internal productivity and external citizen-facing services. Amy and Dave emphasize the importance of public sector leaders considering compliance and ethical responsibilities when harnessing AI capabilities. They discuss how Azure OpenAI ensures compliance, security, and safety for public sector organizations, aligning with international standards. Dave also encourages partners to recognize the current urgency for AI adoption in the public sector and the role of the partner ecosystem in realizing these capabilities. In This Episode You Will Learn: How AI is showing up in the public sector Why generative AI will disrupt and change traditional practices in the public sector Measures in place to align with international compliance standards Some Questions We Ask: How can Azure OpenAI ensure compliance, security, and safety for the public sector? What should public sector leaders consider when harnessing AI capabilities? Are there any current concerns about harnessing AI capabilities to serve customers better? Resources: View Dave Sloan on LinkedIn View Amy Boyle on LinkedIn Discover and follow other Microsoft podcasts at microsoft.com/podcasts Download the Transcript Hosted on Acast. See acast.com/privacy for more information.
Entire industries are being reshaped around the new capabilities of generative AI. In this special bonus episode of Smart Talks with IBM, Tim Harford leads a conversation between two leaders in the field. Srinivasan Venkatarajan is the Director of Global Partner Business at Microsoft, focusing on Azure Data & AI and Azure OpenAI. And Chris McGuire is the General Manager of the Global Microsoft Partnership for IBM. They discuss the efforts by IBM and Microsoft in the generative AI space, how this relationship has been providing value to clients, and why collaboration is necessary for technological progress. Visit us at: ibm.com/smarttalks Learn more about IBM Consulting for Microsoft This is a paid advertisement from IBM.See omnystudio.com/listener for privacy information.
James chats with Mario López Baratas, VP & Chief Innovation Officer at Bravent, about the power of Azure OpenAI Service to transform industry. Tune in to learn: How Azure OpenAI Service is enabled for an on-premises environment, with confidential customer data. The impact of generative AI in a multi-lingual environment. How Bravent accelerated data insights for plant-floor technical and non-technical users alike. Learn more: About Microsoft AI; Contact Bravent. Follow these leaders on LinkedIn: James Caton, Mario López Baratas
TL;DR: You can now buy tickets, apply to speak, or join the expo for the biggest AI Engineer event of 2024. We're gathering *everyone* you want to meet - see you this June.
In last year's "The Rise of the AI Engineer" we put our money where our mouth was and announced the AI Engineer Summit, which fortunately went well: with ~500 live attendees and over ~500k views online, the first iteration of the AI Engineer industry affair seemed to be well received. Competing in an expensive city with 3 other more established AI conferences in the fall calendar, we broke through in terms of in-person experience and online impact. So at the end of Day 2 we announced our second event: the AI Engineer World's Fair. The new website is now live, together with our new presenting sponsor. We were delighted to invite both Ben Dunphy, co-organizer of the conference, and Sam Schillace, the deputy CTO of Microsoft who wrote some of the first Laws of AI Engineering while working with early releases of GPT-4, on the pod to talk about the conference and how Microsoft is all-in on AI Engineering.
Rise of the Planet of the AI Engineer
Since the first AI Engineer piece, AI Engineering has exploded, and the title has been adopted across OpenAI, Meta, IBM, and many, many other companies. 1 year on, it is clear that AI Engineering is not only in full swing, but is an emerging global industry that is successfully bridging the gap:
* between research and product,
* between general-purpose foundation models and in-context use-cases,
* and between the flashy weekend MVP (still great!) and the reliable, rigorously evaluated AI product deployed at massive scale, assisting hundreds of employees and driving millions in profit.
The greatly increased scope of the 2024 AI Engineer World's Fair (more stages, more talks, more speakers, more attendees, more expo…) helps us reflect the growth of AI Engineering in three major dimensions:
* Global Representation: the 2023 Summit was a mostly-American affair. This year we plan to have speakers from top AI companies across five continents, and explore the vast diversity of approaches to AI across global contexts.
* Topic Coverage: In 2023, the Summit focused on the initial questions that the community wrestled with - LLM frameworks, RAG and Vector Databases, Code Copilots and AI Agents. Those are evergreen problems that just got deeper. This year the AI Engineering field has also embraced new core disciplines with more explicit focus on Multimodality, Evals and Ops, Open Source Models and GPU/Inference Hardware providers.
* Maturity/Production-readiness: Two new tracks are dedicated to AI in the Enterprise, government, education, finance, and other highly regulated industries, or AI deployed at larger scale: AI in the Fortune 500, covering at-scale production deployments of AI, and AI Leadership, a closed-door side event for technical AI leaders to discuss engineering and product leadership challenges as VPs and Heads of AI in their respective orgs.
We hope you will join Microsoft and the rest of us as either speaker, exhibitor, or attendee, in San Francisco this June.
Contact us with any enquiries that don't fall into the categories mentioned below.
Show Notes
* Ben Dunphy
* 2023 Summit
* GitHub confirmed $100m ARR on stage
* History of World's Fairs
* Sam Schillace
* Writely on Acquired.fm
* Early Lessons From GPT-4: The Schillace Laws
* Semantic Kernel
* Sam on Kevin Scott (Microsoft CTO)'s podcast in 2022
* AI Engineer World's Fair (SF, Jun 25-27)
* Buy Super Early Bird tickets (Listeners can use LATENTSPACE for $100 off any ticket until April 8, or use GROUP if coming in 4 or more)
* Submit talks and workshops for Speaker CFPs (by April 8)
* Enquire about Expo Sponsorship (Asap.. selling fast)
Timestamps
* [00:00:16] Intro
* [00:01:04] 2023 AI Engineer Summit
* [00:03:11] Vendor Neutral
* [00:05:33] 2024 AIE World's Fair
* [00:07:34] AIE World's Fair: 9 Tracks
* [00:08:58] AIE World's Fair Keynotes
* [00:09:33] Introducing Sam
* [00:12:17] AI in 2020s vs the Cloud in 2000s
* [00:13:46] Syntax vs Semantics
* [00:14:22] Bill Gates vs GPT-4
* [00:16:28] Semantic Kernel and Schillace's Laws of AI Engineering
* [00:17:29] Orchestration: Break it into pieces
* [00:19:52] Prompt Engineering: Ask Smart to Get Smart
* [00:21:57] Think with the model, Plan with Code
* [00:23:12] Metacognition vs Stochasticity
* [00:24:43] Generating Synthetic Textbooks
* [00:26:24] Trade leverage for precision; use interaction to mitigate
* [00:27:18] Code is for syntax and process; models are for semantics and intent.
* [00:28:46] Hands on AI Leadership
* [00:33:18] Multimodality vs "Text is the universal wire protocol"
* [00:35:46] Azure OpenAI vs Microsoft Research vs Microsoft AI Division
* [00:39:40] On Satya
* [00:40:44] Sam at AI Leadership Track
* [00:42:05] Final Plug for Tickets & CFP
Transcript
[00:00:00] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO in residence at Decibel Partners, and I'm joined by my co host Swyx, founder of Small AI.[00:00:16] Intro[00:00:16] swyx: Hey, hey, we're back again with a very special episode, this time with two guests and talking about the very in person events rather than online stuff.[00:00:27] swyx: So first I want to welcome Ben Dunphy, who is my co organizer on AI engineer conferences. Hey, hey, how's it going? We have a very special guest. Anyone who's looking at the show notes and the title will preview this later. But I guess we want to set the context. We are effectively doing promo for the upcoming AI Engineer World's Fair that's happening in June.[00:00:49] swyx: But maybe something that we haven't actually recapped much on the pod is just the origin of the AI Engineer Summit and why, what happens and what went down. Ben, I don't know if you'd like to start with the raw numbers that people should have in mind.[00:01:04] 2023 AI Engineer Summit[00:01:04] Ben Dunphy: Yeah, perhaps your listeners would like just a quick background on the summit.[00:01:09] Ben Dunphy: I mean, I'm sure many folks have heard of our events.
You know, you launched, we launched the AI Engineer Summit last June with your, your article kind of coining the term that was on the tip of everyone's tongue, but curiously had not been actually coined, which is the term AI Engineer, which is now many people's, Job titles, you know, we're seeing a lot more people come to this event, with the job description of AI engineer, with the job title of AI engineer so, is an event that you and I really talked about since February of 2023, when we met at a hackathon you organized we were both excited by this movement and it hasn't really had a name yet.[00:01:48] Ben Dunphy: We decided that an event was warranted and that's why we move forward with the AI Engineer Summit, which Ended up being a great success. You know, we had over 5, 000 people apply to attend in person. We had over 9, 000 folks attend, online with over 20, 000 on the live stream.[00:02:06] Ben Dunphy: In person, we accepted about 400 attendees and had speakers, workshop instructors and sponsors, all congregating in San Francisco over, two days, um, two and a half days with a, with a welcome reception. So it was quite the event to kick off kind of this movement that's turning into quite an exciting[00:02:24] swyx: industry.[00:02:25] swyx: The overall idea of this is that I kind of view AI engineering, at least in all my work in Latent Space and the other stuff, as starting an industry.[00:02:34] swyx: And I think every industry, every new community, needs a place to congregate. And I definitely think that AI engineer, at least at the conference, is that it's meant to be like the biggest gathering of technical engineering people working with AI. Right. I think we kind of got that spot last year. There was a very competitive conference season, especially in San Francisco.[00:02:54] swyx: But I think as far as I understand, in terms of cultural impact, online impact, and the speakers that people want to see, we, we got them all and it was very important for us to be a vendor neutral type of event. Right. , The reason I partnered with Ben is that Ben has a lot of experience, a lot more experience doing vendor neutral stuff.[00:03:11] Vendor Neutral[00:03:11] swyx: I first met you when I was speaking at one of your events, and now we're sort of business partners on that. And yeah, I mean, I don't know if you have any sort of Thoughts on make, making things vendor neutral, making things more of a community industry conference rather than like something that's owned by one company.[00:03:25] swyx: Yeah.[00:03:25] Ben Dunphy: I mean events that are owned by a company are great, but this is typically where you have product pitches and this smaller internet community. But if you want the truly internet community, if you want a more varied audience and you know, frankly, better content for, especially for a technical audience, you want a vendor neutral event. 
And this is because when you have folks that are running the event that are focused on one thing and one thing alone, which is quality, quality of content, quality of speakers, quality of the in person experience, and just of general relevance it really elevates everything to the next level.[00:04:01] Ben Dunphy: And when you have someone like yourself who's coming To this content curation the role that you take at this event, and bringing that neutrality with, along with your experience, that really helps to take it to the next level, and then when you have someone like myself, focusing on just the program curation, and the in person experience, then both of our forces combined, we can like, really create this epic event, and so, these vendor neutral events if you've been to a small community event, Typically, these are vendor neutral, but also if you've been to a really, really popular industry event, many of the top industry events are actually vendor neutral.[00:04:37] Ben Dunphy: And that's because of the fact that they're vendor neutral, not in spite of[00:04:41] swyx: it. Yeah, I've been pretty open about the fact that my dream is to build the KubeCon of AI. So if anyone has been in the Kubernetes world, they'll understand what that means. And then, or, or instead of the NeurIPS, NeurIPS for engineers, where engineers are the stars and engineers are sharing their knowledge.[00:04:57] swyx: Perspectives, because I think AI is definitely moving over from research to engineering and production. I think one of my favorite parts was just honestly having GitHub and Microsoft support, which we'll cover in a bit, but you know, announcing finally that GitHub's copilot was such a commercial success I think was the first time that was actually confirmed by anyone in public.[00:05:17] swyx: For me, it's also interesting as sort of the conference curator to put Microsoft next to competitors some of which might be much smaller AI startups and to see what, where different companies are innovating in different areas.[00:05:27] swyx: Well, they're next to[00:05:27] Ben Dunphy: each other in the arena. So they can be next to each other on stage too.[00:05:33] Why AIE World's Fair[00:05:33] swyx: Okay, so this year World's Fair we are going a lot bigger what details are we disclosing right now? Yeah,[00:05:39] Ben Dunphy: I guess we should start with the name why are we calling it the World's Fair? And I think we need to go back to what inspired this, what actually the original World's Fair was, which was it started in the late 1700s and went to the early 1900s.[00:05:53] Ben Dunphy: And it was intended to showcase the incredible achievements. Of nation states, corporations, individuals in these grand expos. So you have these miniature cities actually being built for these grand expos. In San Francisco, for example, you had the entire Marina District built up in absolutely new construction to showcase the achievements of industry, architecture, art, and culture.[00:06:16] Ben Dunphy: And many of your listeners will know that in 1893, the Nikola Tesla famously provided power to the Chicago World's Fair with his 8 seat power generator. There's lots of great movies and documentaries about this. 
That was the first electric World's Fair, which thereafter it was referred to as the White City.[00:06:33] Ben Dunphy: So in today's world we have technological change that's similar to what was experienced during the industrial revolution in how it's, how it's just upending our entire life, how we live, work, and play. And so we have artificial intelligence, which has long been the dream of humanity.[00:06:51] Ben Dunphy: It's, it's finally here. And the pace of technological change is just accelerating. So with this event, as you mentioned, we, we're aiming to create a singular event where the world's foremost experts, builders, and practitioners can come together to exchange and reflect. And we think this is not only good for business, but it's also good for our mental health.[00:07:12] Ben Dunphy: It slows things down a bit from the Twitter news cycle to an in person festival of smiles, handshakes, connections, and in depth conversations that online media and online events can only ever dream of replicating. So this is an expo led event where the world's top companies will mingle with the world's top founders and AI engineers who are building and enhanced by AI.[00:07:34] AIE World's Fair: 9 Tracks[00:07:34] Ben Dunphy: And not to mention, we're featuring over a hundred talks and workshops across[00:07:37] swyx: nine tracks. Yeah, I mean, those nine tracks will be fun. Actually, do we have a little preview of the tracks in the, the speakers?[00:07:43] Ben Dunphy: We do. Folks can actually see them today at our website. We've updated that at ai.[00:07:48] Ben Dunphy: engineer. So we'd encourage them to go there to see that. But for those just listening, we have nine tracks. So we have multimodality. We have retrieval augmented generation. Featuring LLM frameworks and vector databases, evals and LLM ops, open source models, code gen and dev tools, GPUs and inference, AI agent applications, AI in the fortune 500, and then we have a special track for AI leadership which you can access by purchasing the VP pass which is different from the, the other passes we have.[00:08:20] Ben Dunphy: And I won't go into the Each of these tracks in depth, unless you want to, Swyx but there's more details on the website at ai. engineer.[00:08:28] swyx: I mean, I, I, very much looking forward to talking to our special guests for the last track, I think, which is the what a lot of yeah, leaders are thinking about, which is how to, Inspire innovation in their companies, especially the sort of larger organizations that might not have the in house talents for that kind of stuff.[00:08:47] swyx: So yeah, we can talk about the expo, but I'm very keen to talk about the presenting sponsor if you want to go slightly out of order from our original plan.[00:08:58] AIE World's Fair Keynotes[00:08:58] Ben Dunphy: Yeah, absolutely. So you know, for the stage of keynotes, we have talks confirmed from Microsoft, OpenAI, AWS, and Google.[00:09:06] Ben Dunphy: And our presenting sponsor is joining the stage with those folks. And so that presenting sponsor this year is a dream sponsor. It's Microsoft. It's the company really helping to lead the charge. And into this wonderful new era that we're all taking part in. 
So, yeah,[00:09:20] swyx: you know, a bit of context, like when we first started planning this thing, I was kind of brainstorming, like, who would we like to get as the ideal presenting sponsors, as ideal partners long term, just in terms of encouraging the AI engineering industry, and it was Microsoft.[00:09:33] Introducing Sam[00:09:33] swyx: So Sam, I'm very excited to welcome you onto the podcast. You are CVP and Deputy CTO of Microsoft. Welcome.[00:09:40] Sam Schillace: Nice to be here. I'm looking forward to, I was looking for, to Lessio saying my last name correctly this time. Oh[00:09:45] swyx: yeah. So I, I studiously avoided saying, saying your last name, but apparently it's an Italian last name.[00:09:50] swyx: Ski Lache. Ski[00:09:51] Alessio: Lache. Yeah. No, that, that's great, Sean. That's great as a musical person.[00:09:54] swyx: And it, it's also, yeah, I pay attention to like the, the, the lilt. So it's ski lache and the, the slow slowing of the law is, is what I focused[00:10:03] Sam Schillace: on. You say both Ls. There's no silent letters, you say[00:10:07] Alessio: both of those. And it's great to have you, Sam.[00:10:09] Alessio: You know, we've known each other now for a year and a half, two years, and our first conversation, well, it was at Lobby Conference, and then we had a really good one in the kind of parking lot of a Safeway, because we didn't want to go into Starbucks to meet, so we sat outside for about an hour, an hour and a half, and then you had to go to a Bluegrass concert, so it was great.[00:10:28] Alessio: Great meeting, and now, finally, we have you on Lanespace.[00:10:31] Sam Schillace: Cool, cool. Yeah, I'm happy to be here. It's funny, I was just saying to Swyx before you joined that, like, it's kind of an intimidating podcast. Like, when I listen to this podcast, it seems to be, like, one of the more intelligent ones, like, more, more, like, deep technical folks on it.[00:10:44] Sam Schillace: So, it's, like, it's kind of nice to be here. It's fun. Bring your A game. Hopefully I'll, I'll bring mine. I[00:10:49] swyx: mean, you've been programming for longer than some of our listeners have been alive, so I don't think your technical chops are in any doubt. So you were responsible for Rightly as one of your early wins in your career, which then became Google Docs, and obviously you were then responsible for a lot more G Suite.[00:11:07] swyx: But did you know that you covered in Acquired. fm episode 9, which is one of the podcasts that we model after.[00:11:13] Sam Schillace: Oh, cool. I didn't, I didn't realize that the most fun way to say this is that I still have to this day in my personal GDocs account, the very first Google doc, like I actually have it.[00:11:24] Sam Schillace: And I looked it up, like it occurred to me like six months ago that it was probably around and I went and looked and it's still there. So it's like, and it's kind of a funny thing. Cause it's like the backend has been rewritten at least twice that I know of the front end has been re rewritten at least twice that I know of.[00:11:38] Sam Schillace: So. I'm not sure what sense it's still the original one it's sort of more the idea of the original one, like the NFT of it would probably be more authentic. I[00:11:46] swyx: still have it. It's a ship athesia thing. 
Does it, does it say hello world or something more mundane?[00:11:52] Sam Schillace: It's, it's, it's me and Steve Newman trying to figure out if some collaboration stuff is working, and also a picture of Edna from the Incredibles that I probably pasted in later, because that's That's too early for that, I think.[00:12:05] swyx: People can look up your LinkedIn, and we're going to link it on the show notes, but you're also SVP of engineering for Box, and then you went back to Google to do Google, to lead Google Maps, and now you're deputy CTO.[00:12:17] AI in 2020s vs the Cloud in 2000s[00:12:17] swyx: I mean, there's so many places to start, but maybe one place I like to start off with is do you have a personal GPT 4 experience.[00:12:25] swyx: Obviously being at Microsoft, you have, you had early access and everyone talks about Bill Gates's[00:12:30] Sam Schillace: demo. Yeah, it's kind of, yeah, that's, it's kind of interesting. Like, yeah, we got access, I got access to it like in September of 2022, I guess, like before it was really released. And I it like almost instantly was just like mind blowing to me how good it was.[00:12:47] Sam Schillace: I would try experiments like very early on, like I play music. There's this thing called ABC notation. That's like an ASCII way to represent music. And like, I was like, I wonder if it can like compose a fiddle tune. And like it composed a fiddle tune. I'm like, I wonder if it can change key, change the key.[00:13:01] Sam Schillace: Like it's like really, it was like very astonishing. And I sort of, I'm very like abstract. My background is actually more math than CS. I'm a very abstract thinker and sort of categorical thinker. And the, the thing that occurred to me with, with GPT 4 the first time I saw it was. This is really like the beginning, it's the beginning of V2 of the computer industry completely.[00:13:23] Sam Schillace: I had the same feeling I had when, of like a category shifting that I had when the cloud stuff happened with the GDocs stuff, right? Where it's just like, all of a sudden this like huge vista opens up of capabilities. And I think the way I characterized it, which is a little bit nerdy, but I'm a nerd so lean into it is like everything until now has been about syntax.[00:13:46] Syntax vs Semantics[00:13:46] Sam Schillace: Like, we have to do mediation. We have to describe the real world in forms that the digital world can manage. And so we're the mediation, and we, like, do that via things like syntax and schema and programming languages. And all of a sudden, like, this opens the door to semantics, where, like, you can express intention and meaning and nuance and fuzziness.[00:14:04] Sam Schillace: And the machine itself is doing, the model itself is doing a bunch of the mediation for you. And like, that's obviously like complicated. We can talk about the limits and stuff, and it's getting better in some ways. And we're learning things and all kinds of stuff is going on around it, obviously.[00:14:18] Sam Schillace: But like, that was my immediate reaction to it was just like, Oh my God.[00:14:22] Bill Gates vs GPT-4[00:14:22] Sam Schillace: Like, and then I heard about the build demo where like Bill had been telling Kevin Scott this, This investment is a waste. It's never going to work. AI is blah, blah, blah. 
And come back when it can pass like an AP bio exam.[00:14:33] Sam Schillace: And they actually literally did that at one point, they brought in like the world champion of the, like the AP bio test or whatever the AP competition and like it and chat GPT or GPT 4 both did the AP bio and GPT 4 beat her. So that was the moment that convinced Bill that this was actually real.[00:14:53] Sam Schillace: Yeah, it's fun. I had a moment with him actually about three weeks after that when we had been, so I started like diving in on developer tools almost immediately and I built this thing with a small team that's called the Semantic Kernel which is one of the very early orchestrators just because I wanted to be able to put code and And inference together.[00:15:10] Sam Schillace: And that's probably something we should dig into more deeply. Cause I think there's some good insights in there, but I I had a bunch of stuff that we were building and then I was asked to go meet with Bill Gates about it and he's kind of famously skeptical and, and so I was a little bit nervous to meet him the first time.[00:15:25] Sam Schillace: And I started the conversation with, Hey, Bill, like three weeks ago, you would have called BS on everything I'm about to show you. And I would probably have agreed with you, but we've both seen this thing. And so we both know it's real. So let's skip that part and like, talk about what's possible.[00:15:39] Sam Schillace: And then we just had this kind of fun, open ended conversation and I showed him a bunch of stuff. So that was like a really nice, fun, fun moment as well. Well,[00:15:46] swyx: that's a nice way to meet Bill Gates and impress[00:15:48] Sam Schillace: him. A little funny. I mean, it's like, I wasn't sure what he would think of me, given what I've done and his.[00:15:54] Sam Schillace: Crown Jewel. But he was nice. I think he likes[00:15:59] swyx: GDocs. Crown Jewel as in Google Docs versus Microsoft Word? Office.[00:16:03] Sam Schillace: Yeah. Yeah, versus Office. Yeah, like, I think, I mean, I can imagine him not liking, I met Steven Snofsky once and he sort of respectfully, but sort of grimaced at me. You know, like, because of how much trauma I had caused him.[00:16:18] Sam Schillace: So Bill was very nice to[00:16:20] swyx: me. In general it's like friendly competition, right? They keep you, they keep you sharp, you keep each[00:16:24] Sam Schillace: other sharp. Yeah, no, I think that's, it's definitely respect, it's just kind of funny.[00:16:28] Semantic Kernel and Schillace's Laws of AI Engineering[00:16:28] Sam Schillace: Yeah,[00:16:28] swyx: So, speaking of semantic kernel, I had no idea that you were that deeply involved, that you actually had laws named after you.[00:16:35] swyx: This only came up after looking into you for a little bit. Skelatches laws, how did those, what's the, what's the origin[00:16:41] Sam Schillace: story? Hey! Yeah, that's kind of funny. I'm actually kind of a modest person and so I'm sure I feel about having my name attached to them. Although I do agree with all, I believe all of them because I wrote all of them.[00:16:49] Sam Schillace: This is like a designer, John Might, who works with me, decided to stick my name on them and put them out there. Seriously, but like, well, but like, so this was just I, I'm not, I don't build models. Like I'm not an AI engineer in the sense of, of like AI researcher that's like doing inference. Like I'm somebody who's like consuming the models.[00:17:09] Sam Schillace: Exactly. 
So it's kind of funny when you're talking about AI engineering, like it's a good way of putting it. Cause that's how like I think about myself. I'm like, I'm an app builder. I just want to build with this tool. Yep. And so we spent all of the fall and into the winter in that first year, like Just trying to build stuff and learn how this tool worked.[00:17:29] Orchestration: Break it into pieces[00:17:29] Sam Schillace: And I guess those are a little bit in the spirit of like Robert Bentley's programming pearls or something. I was just like, let's kind of distill some of these ideas down of like. How does this thing work? I saw something I still see today with people doing like inference is still kind of expensive.[00:17:46] Sam Schillace: GPUs are still kind of scarce. And so people try to get everything done in like one shot. And so there's all this like prompt tuning to get things working. And one of the first laws was like, break it into pieces. Like if it's hard for you, it's going to be hard for the model. But if it's you know, there's this kind of weird thing where like, it's.[00:18:02] Sam Schillace: It's absolutely not a human being, but starting to think about, like, how would I solve the problem is often a good way to figure out how to architect the program so that the model can solve the problem. So, like, that was one of the first laws. That came from me just trying to, like, replicate a test of a, like, a more complicated, There's like a reasoning process that you have to go through that, that Google was, was the react, the react thing, and I was trying to get GPT 4 to do it on its own.[00:18:32] Sam Schillace: And, and so I'd ask it the question that was in this paper, and the answer to the question is like the year 2000. It's like, what year did this particular author who wrote this book live in this country? And you've kind of got to carefully reason through it. And like, I could not get GPT 4 to Just to answer the question with the year 2000.[00:18:50] Sam Schillace: And if you're thinking about this as like the kernel is like a pipelined orchestrator, right? It's like very Unix y, where like you have a, some kind of command and you pipe stuff to the next parameters and output to the next thing. So I'm thinking about this as like one module in like a pipeline, and I just want it to give me the answer.[00:19:05] Sam Schillace: I don't want anything else. And I could not prompt engineer my way out of that. I just like, it was giving me a paragraph or reasoning. And so I sort of like anthropomorphized a little bit and I was like, well, the only way you can think about stuff is it can think out loud because there's nothing else that the model does.[00:19:19] Sam Schillace: It's just doing token generation. And so it's not going to be able to do this reasoning if it can't think out loud. And that's why it's always producing this. But if you take that paragraph of output, which did get to the right answer and you pipe it into a second prompt. That just says read this conversation and just extract the answer and report it back.[00:19:38] Sam Schillace: That's an easier task. That would be an easier task for you to do or me to do. It's easier reasoning. And so it's an easier thing for the model to do and it's much more accurate. And that's like 100 percent accurate. It always does that. 
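A minimal sketch of the two-prompt pipeline Sam describes here: the first call lets the model reason out loud, and a second, much simpler call only extracts the final answer from that reasoning. This is an illustration of the idea, not Semantic Kernel code; the deployment name, prompts, and sample question are placeholders.

```python
# Sketch of "break it into pieces": step 1 lets the model think out loud,
# step 2 is a much easier extraction task run over that reasoning transcript.
# Deployment name, prompts, and the sample question are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

def ask(system: str, user: str) -> str:
    """One pipeline stage: a single chat completion with a fixed instruction."""
    out = client.chat.completions.create(
        model="gpt-4o",  # your deployment name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        temperature=0,
    )
    return out.choices[0].message.content

question = "In which year was the author of 'On the Origin of Species' born?"  # placeholder question

# Stage 1: reasoning module -- the model is expected to think out loud.
reasoning = ask("Reason step by step about the question. Show your working.", question)

# Stage 2: extraction module -- read the reasoning and report only the answer.
answer = ask("Read the reasoning below and reply with ONLY the final answer.", reasoning)
print(answer)
```

Splitting the task this way trades one extra call for much higher reliability on the extraction step, which is the "break it into pieces" law in miniature.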
So like that was one of those, those insights that led to those laws.[00:19:52] Prompt Engineering: Ask Smart to Get Smart[00:19:52] Sam Schillace: I think one of the other ones that's kind of interesting that I think people still don't fully appreciate is that GPT-4 is the rough equivalent of like a human being sitting down for centuries or millennia and reading all the books that they can find. It's this vast mind, right, and the embedding space, the latent space, is 100K, a 100,000-dimensional space, right?[00:20:14] Sam Schillace: Like it's this huge, high dimensional space, and we don't have good intuition about high dimensional spaces, like the topology works in really weird ways, connectivity works in weird ways. So a lot of what we're doing is like aiming the attention of a model into some part of this very weirdly connected space.[00:20:30] Sam Schillace: That's kind of what prompt engineering is. But that kind of, like, what we observed to begin with that led to one of those laws was, you know, ask smart to get smart. And I think we've all, we all understand this now, right? Like this is the whole field of prompt engineering. But like, if you ask like a simple, a simplistic question of the model, you'll get kind of a simplistic answer.[00:20:50] Sam Schillace: Cause you're pointing it at a simplistic part of that high dimensional space. And if you ask it a more intelligent question, you get more intelligent stuff back out. And so I think that's part of like how you think about programming as well. It's like, how are you directing the attention of the model?[00:21:04] Sam Schillace: And I think we still don't have a good intuitive feel for that. To me,[00:21:08] Alessio: the most interesting thing is how do you tie the ask smart, get smart with the syntax and semantics piece. I gave a talk at GDC last week about the rise of full stack employees and how these models are like semantic representation of tasks that people do.[00:21:23] Alessio: But at the same time, we have code also becoming a semantic representation. You know, I give you the example of like Python's sort: it's like really a semantic function. It's not code you write, but it's actually code underneath. How do you think about tying the two together, where you have code[00:21:39] Alessio: to then extract the smart parts so that you don't have to like ask smart every time and like kind of wrap them in like higher level functions?[00:21:46] Sam Schillace: Yeah, this is, this is actually, we're skipping ahead to kind of later in the conversation, but I like to, I usually like to distill stuff down into these little aphorisms that kind of help me remember them.[00:21:57] Think with the model, Plan with Code[00:21:57] Sam Schillace: You know, so we can dig into a bunch of them. One of them is pixels are free, one of them is bots are docs. But the one that's interesting here is think with the model, plan with code. And so one of the things, so one of the things we've realized, we've been trying to do lots of these like longer running tasks.[00:22:13] Sam Schillace: Like we did this thing called the infinite chatbot, which was the successor to the Semantic Kernel, which is an internal project. It's a lot like GPTs, the OpenAI GPTs, but it's like a little bit more advanced in some ways, kind of a deep exploration of a RAG-based bot system.
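Alessio's idea of wrapping a carefully engineered prompt inside a higher-level function is roughly what Semantic Kernel calls a semantic function. A hedged sketch of the concept, independent of any particular SDK, follows; the `call_llm` helper and the prompt wording are illustrative assumptions, not a real API.

```python
# A "semantic function": a prompt template wrapped so the rest of the codebase
# can call it like any other function, without redoing the prompt engineering.
from string import Template

def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for your chat-completion client")

def semantic_function(template: str):
    """Turn a carefully worded prompt template into a reusable Python callable."""
    compiled = Template(template)
    def run(**kwargs) -> str:
        return call_llm(compiled.substitute(**kwargs))
    return run

# The "ask smart" work is done once, here, and reused everywhere else.
summarize = semantic_function(
    "You are a careful technical editor. Summarize the following text in "
    "exactly three bullet points for a senior engineering audience:\n$text"
)

# Example use: summary = summarize(text=some_document)
```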
And then we did multi-agents from that, trying to do some autonomy stuff and we're, and we're kind of banging our head against this thing.[00:22:34] Sam Schillace: And you know, one of the things I started to realize, this is going to get nerdy for a second. I apologize, but let me dig in on it for just a second. No apology needed. Um, what we realized is like, again, this is a little bit of an anthropomorphism and an illusion that we're having. So like when we look at these models, we think there's something continuous there.[00:22:51] Sam Schillace: We're having a conversation with ChatGPT or whatever, with Azure OpenAI, or like, but what's really happening, it's a little bit like watching claymation, right? Like when you watch claymation, you don't think that the clay model is actually really alive. You know that there's like a bunch of still, disconnected shots that your mind is connecting into a continuous experience.[00:23:12] Metacognition vs Stochasticity[00:23:12] Sam Schillace: And that's kind of the same thing that's going on with these models. Like, all the prompts are disconnected no matter what. Which means you're putting a lot of weight on memory, right? This is the thing we talked about. You're like, you're putting a lot of weight on precision and recall of your memory system.[00:23:27] Sam Schillace: And so like, and it turns out like, because the models are stochastic, they're kind of random. They'll make stuff up if things are missing. If you're naive about your, your memory system, you'll get lots of like accumulated similar memories that will kind of clog the system, things like that. So there's lots of ways in which like, memory is hard to manage well, and, and, and that's okay.[00:23:47] Sam Schillace: But what happens is when you're doing plans and you're doing these longer running things that you're talking about, that second level, the metacognition, is very vulnerable to that stochastic noise, which is like, I totally want to put this on a bumper sticker, that like metacognition is susceptible to stochasticity would be like the great bumper sticker.[00:24:07] Sam Schillace: So what, these things are very vulnerable to feedback loops when they're trying to do autonomy, and they're very vulnerable to getting lost. So we've had these, like, multi-agent, autonomous agent things get kind of stuck on like complimenting each other, or they'll get stuck on being quote unquote frustrated and they'll go on strike.[00:24:22] Sam Schillace: Like there's all kinds of weird like feedback loops you get into. So what we've learned, to answer your question of how you put all this stuff together, is: you have to, the model's good at thinking, but it's not good at planning. So you do planning in code. So you have to describe the larger process of what you're doing in code somehow.[00:24:38] Sam Schillace: So semantic intent or whatever. And then you let the model kind of fill in the pieces.[00:24:43] Generating Synthetic Textbooks[00:24:43] Sam Schillace: I'll give a less abstract like example. It's a little bit of an old example. I did this like last year, but at one point I wanted to see if I could generate textbooks. And so I wrote this thing called the textbook factory.[00:24:53] Sam Schillace: And it's, it's tiny. It's like a Jupyter notebook with, like, you know, 200 lines of Python and like six very short prompts, but you basically give it a sentence.
And it like pulls out the topic and the level from that sentence, so you, like, I would like fifth grade reading. I would like eighth grade English.[00:25:11] Sam Schillace: Ninth grade US history, whatever. That, by the way, all, all by itself, like would've been an almost impossible job like three years ago. It's like totally amazing like that by itself. Just parsing an arbitrary natural language sentence to get these two pieces of information out is like almost trivial now.[00:25:27] Sam Schillace: Which is amazing. So it takes that and it just like makes like a thousand calls to the API and it goes and builds a full-year textbook, like decides what the curriculum is with one of the prompts. It breaks it into chapters. It writes all the lessons and lesson plans and like builds a teacher's guide with all the answers to all the questions.[00:25:42] Sam Schillace: It builds a table of contents, like all that stuff. It's super reliable. You always get a textbook. It's super brittle. You never get a cookbook or a novel, but like you could kind of define that domain pretty carefully, like I can describe the metacognition, the high level plan for how do you write a textbook, right?[00:25:59] Sam Schillace: You like decide the curriculum and then you write all the chapters and you write the teacher's guide and you write the table of contents, like you can, you can describe that out pretty well. And so having that like code exoskeleton wrapped around the model is really helpful, like it keeps the model from drifting off and then you don't have as many of these vulnerabilities around memory that you would normally have.[00:26:19] Sam Schillace: So like, that's kind of, I think, where the syntax and semantics comes together right now.[00:26:24] Trade leverage for precision; use interaction to mitigate[00:26:24] Sam Schillace: And then I think the question for all of us is: how do you get more leverage out of that? Right? So one of the things that I don't love about virtually everything anyone's built for the last year and a half is people are holding the hands of the model on everything.[00:26:37] Sam Schillace: Like the leverage is very low, right? You can't turn these things loose to do anything really interesting for very long. You can kind of, and the places where people are getting more work out per unit of work in are usually where somebody has done exactly what I just described. They've kind of figured out what the pattern of the problem is in enough of a way that they can write some code for it.[00:26:59] Sam Schillace: And then, like, so I've seen like sales support stuff. I've seen like codebase tuning stuff, like, there's lots of things that people are doing where like, you can get a lot of value in some relatively well defined domain using a little bit of the model's ability to think for you and a little, and a little bit of code.[00:27:18] Code is for syntax and process; models are for semantics and intent.[00:27:18] Sam Schillace: And then I think the next wave is like, okay, do we do stuff like domain-specific languages to like make the planning capabilities better? Do we like start to build more sophisticated primitives? We're starting to think about and talk about like Power Automate and a bunch of stuff inside of Microsoft that we're going to wrap in these like building blocks.[00:27:34] Sam Schillace: So the models have these chunks of reliable functionality that they can invoke as part of these plans, right?
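The textbook factory is a concrete case of the "code exoskeleton" pattern: the high-level plan, the metacognition, is fixed in ordinary code, and the model only fills in the semantic pieces at each step. A heavily simplified sketch of that shape follows; the prompts and the hypothetical `call_llm` helper are illustrative assumptions, not the original notebook.

```python
# The "code exoskeleton" pattern: the plan lives in code, the model fills in pieces.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for your chat-completion client")

def textbook_factory(request: str) -> dict:
    # Step 1: parse topic and grade level out of a free-form sentence.
    topic_and_level = call_llm(
        "Extract the subject and grade level from this request, as 'subject | level':\n"
        + request
    )

    # Step 2 onward: curriculum, chapters, lessons, and teacher's guide come from
    # separate, narrow prompts -- the ordering and structure never vary.
    curriculum = call_llm(f"Write a one-year curriculum outline for {topic_and_level}.")
    chapters = call_llm(f"Break this curriculum into chapter titles, one per line:\n{curriculum}")

    book = {"curriculum": curriculum, "chapters": {}}
    for chapter in chapters.splitlines():
        if not chapter.strip():
            continue
        lesson = call_llm(f"Write the lesson text for the chapter '{chapter}'.")
        guide = call_llm(f"Write a teacher's guide with answers for this lesson:\n{lesson}")
        book["chapters"][chapter] = {"lesson": lesson, "teachers_guide": guide}

    book["table_of_contents"] = list(book["chapters"].keys())
    return book
```

Because the loop and the ordering are code, the output is always a textbook and never drifts into a cookbook or a novel, which is exactly the reliability/brittleness trade-off described above.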
Because you don't want like, if you're going to ask the model to go do something and the output's going to be a hundred thousand lines of code, if it's got to generate that code every time, the randomness, the stochasticity is like going to make that basically not reliable.[00:27:54] Sam Schillace: You want it to generate like a 10- or 20-line high-level semantic plan for this thing that gets handed to some markup executor that runs it and that invokes that API call, the 100,000 lines of code behind it. And like, that's a really nice robust system for now. And then as the models get smarter, as new models emerge, then we get better plans, we get more sophistication.[00:28:17] Sam Schillace: In terms of what they can choose, things like that. Right. So I think like that feels like that's probably the path forward for a little while, at least, like there was, there was a lot there. I, sorry, like I've been thinking, you can tell I've been thinking about it a lot. Like this is kind of all I think about is like, how do you build[00:28:31] Sam Schillace: really high value stuff out of this. And where do we go? Yeah. The, the role where[00:28:35] swyx: we are. Yeah. The intermixing of code and, and LLMs is, is a lot of the role of the AI engineer. And I, I, I think in a very real way, you were one of the first to, because obviously you had early access. Honestly, I'm surprised.[00:28:46] Hands on AI Leadership[00:28:46] swyx: How are you so hands on? How do you choose to, to dedicate your time? How do you advise other tech leaders? Right. You know, you, you are. You have people working for you, you could not be hands on, but you seem to be hands on. What's the allocation that people should have, especially if they're senior tech[00:29:03] Sam Schillace: leaders?[00:29:04] Sam Schillace: It's mostly just fun. Like, I'm a maker, and I like to build stuff. I'm a little bit idiosyncratic. I've got ADHD, and so I won't build anything, I won't work on anything I'm bored with. So I have no discipline. If I'm not actually interested in the thing, I can't just, like, do it, force myself to do it.[00:29:17] Sam Schillace: But, I mean, if you're not interested in what's going on right now in the industry, like, go find a different industry, honestly. Like, I seriously, like, this is, I, well, it's funny, like, I don't mean to be snarky, but, like, I was at a dinner, like, a, I don't know, six months ago or something, and I was sitting next to a CTO of a large, I won't name the corporation because it would name the person, but I was sitting next to the CTO of a very large Japanese technical company, and he was like, like, nothing has been interesting since the internet, and this is interesting now, like, this is fun again.[00:29:46] Sam Schillace: And I'm like, yeah, totally, like this is like the most interesting thing that's happened in 35 years of my career, like, we can play with semantics and natural language, and we can have these things that are like sort of active, can kind of be independent in certain ways and can do stuff for us and can like, reach all of these interesting problems.[00:30:02] Sam Schillace: So like that's part of it, it's just kind of fun to, to do stuff and to build stuff. I, I just can't, can't resist. I'm not crazy hands-on, like, my engineering team's listening right now.
They're like probably laughing 'cause they, I never, I, I don't really touch code directly 'cause I'm so obsessive.[00:30:17] Sam Schillace: I told them like, if I start writing code, that's all I'm gonna do. And it's probably better if I stay a little bit high level and like, think about it. I've got a really great couple of engineers, a bunch of engineers underneath me, a bunch of designers underneath me that are really good folks that we just bounce ideas off of back and forth and it's just really fun.[00:30:35] Sam Schillace: That's the role I came to Microsoft to do, really, was to just kind of bring some energy around innovation, some energy around consumer. We didn't know that this was coming when I joined. I joined like eight months before it hit us, but I think Kevin might've had an idea it was coming. And then when it hit, I just kind of dove in with both feet cause it's just so much fun to do.[00:30:55] Sam Schillace: Just to tie it back a little bit to the, the Google Docs stuff. When we did Writely originally, it's not like I built Writely in jQuery or anything. Like I built that thing on bare metal back before there were decent JavaScript VMs.[00:31:10] Sam Schillace: I was just telling somebody today, like, you were rate limited. So like just computing the diff when you type something, like doing the string diff, I had to write like a binary search on each end of the string diff because like you didn't have enough iterations of a for loop to search character by character.[00:31:24] Sam Schillace: I mean, like that's how rough it was, none of the browsers implemented stuff directly, whatever. It's like, just really messy. And like, that's, like, as somebody who's been doing this for a long time, like, that's the place where you want to engage, right? If things are easy, and it's easy to go do something, it's too late.[00:31:42] Sam Schillace: Even if it's not too late, it's going to be crowded, but like the right time to do something new and disruptive and technical is, first of all, still when it's controversial, but second of all, when you have this, like, you can see the future, you ask this, like, what if question, and you can see where it's going, but you have this, like, pit in your stomach as an engineer as to, like, how crappy this is going to be to do.[00:32:04] Sam Schillace: Like, that's really the right moment to engage with stuff. We're just like, this is going to suck, it's going to be messy, I don't know what the path is, I'm going to get sticks and thorns in my hair, like I, I, it's going to have false starts, and I don't really... This is why those Schillace laws are kind of funny, because, like, I, I, like, you know, I wrote them down at one point because they were like my best guess, but I'm like, half of these are probably wrong, and I think they've all held up pretty well, but I'm just like guessing along with everybody else, we're just trying to figure this thing out still, right, and like, and I think the only way to do that is to just engage with it.[00:32:34] Sam Schillace: You just have to like, build stuff. If you're, I can't tell you the number of execs I've talked to who have opinions about AI and have not sat down with anything for more than 10 minutes to like actually try to get anything done.
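As an aside on the Writely anecdote: binary-searching each end of the diff works because a whole-substring comparison was a single fast native operation in the browser, while a character-by-character loop in early JavaScript interpreters would run out of its iteration budget. A rough reconstruction of the idea, not the original code, looks like this:

```python
# Reconstruction (not the original Writely code) of finding a single edit span
# by binary-searching the common prefix and suffix with bulk substring compares.

def common_prefix_len(old: str, new: str) -> int:
    lo, hi = 0, min(len(old), len(new))
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if old[:mid] == new[:mid]:   # one bulk comparison instead of a char loop
            lo = mid
        else:
            hi = mid - 1
    return lo

def common_suffix_len(old: str, new: str, prefix: int) -> int:
    # Cap by the prefix so the two matched regions never overlap.
    lo, hi = 0, min(len(old), len(new)) - prefix
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if old[len(old) - mid:] == new[len(new) - mid:]:
            lo = mid
        else:
            hi = mid - 1
    return lo

def simple_diff(old: str, new: str):
    """Return (start, removed_text, inserted_text) for a single contiguous edit."""
    p = common_prefix_len(old, new)
    s = common_suffix_len(old, new, p)
    return p, old[p:len(old) - s], new[p:len(new) - s]

# simple_diff("hello world", "hello brave world") -> (6, "", "brave ")
```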
You know, it's just like, it's incomprehensible to me that you can watch this stuff through the lens of like the press and, forgive me, podcasts, and feel like you actually know what you're talking about.[00:32:59] Sam Schillace: Like, you have to like build stuff. Like, break your nose on stuff and like figure out what doesn't work.[00:33:04] swyx: Yeah, I mean, I view us as a starting point, as a way for people to get exposure to what they should be looking at, and they still have to do the work, as do we. Yeah, I'll basically endorse, like, I think, most of the laws.[00:33:18] Multimodality vs "Text is the universal wire protocol"[00:33:18] swyx: I think the one I question the most now is text is the universal wire protocol. There was a very popular article, "Text Is the Universal Interface," by roon, who now works at OpenAI. And I, actually, we just, we just dropped a podcast with David Luan, who's CEO of Adept now, but he was VP of Eng, and he pitched Kevin Scott for the original Microsoft investment in OpenAI.[00:33:40] swyx: Where he's basically pivoting to, or just betting very hard on, multimodality. I think that's something that we don't really position very well. I think this year, we're trying to all figure it out. I don't know if you have an updated perspective on multimodal models and how that affects agents[00:33:54] Sam Schillace: or not.[00:33:55] Sam Schillace: Yeah, I mean, I think the multi, I think multimodality is really important. And I, I think it's only going to get better from here. For sure. Yeah, the text is the universal wire protocol. You're probably right. Like, I don't know that I would defend that one entirely. Note that it doesn't say English, right?[00:34:09] Sam Schillace: Like it's, it's not even necessarily natural language. Like there's stuff like Steve Lucco, who's the guy who created TypeScript, created TypeChat, right? Which is this like way to get LLMs to be very precise and return syntactically correct JavaScript. So like, I, yeah, I think like multimodality, like, I think part of the challenge with it is like, it's a little harder to access[00:34:30] Sam Schillace: programmatically still. Like, I think, you know, and I do think like, you know, like when, when like DALL-E and stuff started to come out, I was like, oh, Photoshop's in trouble, cuz like, you know, I'm just gonna like describe images and you don't need Photoshop anymore. Which hasn't played out that way, like they're actually like adding a bunch of tools. Like, for multimodality to be really, like, super supercharged, you need to be able to do stuff like descriptively, like, okay, find the dog in this picture and mask around it.[00:34:58] Sam Schillace: Okay, now make it larger and whatever. You need to be able to interact with stuff textually, which we're starting to be able to do. Like, you can do some of that stuff.
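The TypeChat idea mentioned above — declare the shape you want, ask the model for output matching it, validate in code, and retry with the error fed back — can be sketched without the library itself. This is not TypeChat's actual API, just the general shape of the technique, again with a hypothetical `call_llm` helper and an invented example schema.

```python
# Sketch of schema-constrained output: request JSON matching a declared shape,
# validate it in code, and retry with the validation error fed back to the model.
import json
from typing import Optional

def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for your chat-completion client")

SCHEMA_HINT = '{"name": string, "severity": "low" | "medium" | "high"}'

def validate(obj) -> Optional[str]:
    if not isinstance(obj, dict):
        return "top-level value must be an object"
    if not isinstance(obj.get("name"), str):
        return "'name' must be a string"
    if obj.get("severity") not in ("low", "medium", "high"):
        return "'severity' must be one of low, medium, high"
    return None

def structured_call(task: str, retries: int = 1) -> dict:
    prompt = f"{task}\nRespond with ONLY JSON matching this shape: {SCHEMA_HINT}"
    for _ in range(retries + 1):
        raw = call_llm(prompt)
        try:
            obj = json.loads(raw)
            error = validate(obj)
        except json.JSONDecodeError as exc:
            obj, error = None, f"invalid JSON: {exc}"
        if error is None:
            return obj
        # Feed the validation error back so the model can repair its output.
        prompt = (f"{task}\nYour previous reply was rejected ({error}). "
                  f"Respond with ONLY JSON matching: {SCHEMA_HINT}")
    raise ValueError("model did not produce valid JSON after retries")
```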
But there's probably a whole bunch of new capabilities that are going to come out that are going to make it more interesting.[00:35:11] Sam Schillace: So, I don't know, like, I suspect we're going to wind up looking kind of like Unix at the end of the day, where, like, there's pipes and, like, stuff goes over pipes, and some of the pipes are character pipes, and some of them are, whatever, like binary pipes, and that's going to be compatible with a lot of the systems we have out there, so like, that's probably still... And I think there's a lot to be gotten from, from text as a language, but I suspect you're right.[00:35:37] Sam Schillace: Like that particular law is not going to hold up super well. But we didn't have multimodal going when I wrote it. I'll take that one out as well.[00:35:46] Azure OpenAI vs Microsoft Research vs Microsoft AI Division[00:35:46] swyx: I know. Yeah, I mean, the innovations that keep coming out of Microsoft. You mentioned multi-agent. I think you're talking about AutoGen.[00:35:52] swyx: But there's always research coming out of MSR. Yeah. Phi-1, Phi-2. Yeah, there's a bunch of[00:35:57] Sam Schillace: stuff. Yeah.[00:35:59] swyx: What should, how should the outsider or the AI engineer, just as a sort of final word, like, how should they view the Microsoft portfolio of things? I know you're not here to be a salesman, but what, how do you explain, you know, Microsoft's AI[00:36:12] Sam Schillace: work to people?[00:36:13] Sam Schillace: There's a lot of stuff going on. Like, first of all, like, I should, I'll be a little tiny bit of a salesman for, like, two seconds and just point out that, like, one of the things we have is the Microsoft for Startups Founders Hub. So, like, you can get, like, Azure credits and stuff from us. Like, up to, like, 150 grand, I think, over four years.[00:36:29] Sam Schillace: So, like, it's actually pretty easy to get credit. You can start with, I think, 500 bucks or something, with very little other than just an idea. So like there's, that's pretty cool. Like, Microsoft is very much all in on AI at, at many levels. And so like that, you mentioned, you mentioned AutoGen, like, so I sit in the office of the CTO, Microsoft Research sits under him, under the office of the CTO as well.[00:36:51] Sam Schillace: So the AutoGen group came out of somebody in MSR, like in that group. So like there's sort of the spectrum of very researchy things going on in research, where we're doing things like Phi, which is the small language model efficiency exploration that's really, really interesting. Lots of very technical folks there that are building different kinds of models.[00:37:10] Sam Schillace: And then there's like, groups like my group that are kind of a little bit in the middle that straddle product and, and, and research and kind of have a foot in both worlds and are trying to kind of be a bridge into the product world. And then there's like a whole bunch of stuff on the product side of things.[00:37:23] Sam Schillace: So there's all the Azure OpenAI stuff, and then there's all the stuff that's in Office and Windows. And I, so I think, like, the way, I don't know, the way to think about Microsoft is we're just powering AI at every level we can, and making it as accessible as we can to both end users and developers.[00:37:42] Sam Schillace: There's this really nice research arm at one end of that spectrum that's really driving the cutting edge. The Phi stuff is really amazing. It broke the Chinchilla curves.
Right, like we didn't, that's the "Textbooks Are All You Need" paper, and it's still kind of controversial, but like that was really a surprising result that came out of MSR.[00:37:58] Sam Schillace: And so like I think Microsoft is both being a thought leader on one end, and on the other end, with all the Azure OpenAI, all the Azure tooling that we have, like very much a developer-centric, kind of the tinkerer's paradise that Microsoft always was. It's like a great place to come and consume all these things.[00:38:14] Sam Schillace: There's really amazing stuff, ideas that we've had, like these very rich, long-running, RAG-based chatbots that we didn't talk about that are like now possible to just go build with Azure AI Studio for yourself. You can build and deploy like a chatbot that's trained on your data specifically, like very easily and things like that.[00:38:31] Sam Schillace: So like there's that end of things. And then there's all this stuff that's in Office, where like, you could just like use the copilots, both in Bing, but also just in your daily work. So like, it's just kind of everywhere at this point, like everyone in the company thinks about it all the time.[00:38:43] Sam Schillace: There's like no single answer to that question. That was way more salesy than I thought I was capable of, but like, that is actually the genuine truth. Like, it is all the time, it is all levels, it is all the way from really pragmatic, approachable stuff for somebody starting out who doesn't know things, all the way to like absolutely cutting-edge research, silicon, models, AI for science, like, we didn't talk about any of the AI for science stuff, I've seen magical stuff coming out of the research group on that topic, like just crazy cool stuff that's coming, so.[00:39:13] Sam Schillace: You've[00:39:14] swyx: called this since you joined Microsoft. I point listeners to the podcast that you did in 2022, pre-ChatGPT, with Kevin Scott. And yeah, you've been saying this from the beginning. So this is not a new talk track for you, like you've, you, you've been a genuine believer for a long time.[00:39:28] swyx: And,[00:39:28] Sam Schillace: and just to be clear, like I haven't been at Microsoft that long. I've only been here for like two, a little over two years and you know, it's a little bit weird for me 'cause for a lot of my career they were the competitor and the enemy and you know, it's kind of funny to be here, but like it's really remarkable what's[00:39:40] On Satya[00:39:40] Sam Schillace: going on. I really, really like Satya. I've met a, met and worked with a bunch of big tech CEOs and I think he's a genuinely awesome person and he's fun to work with and has a really great vision. So like, and I obviously really like Kevin, we've been friends for a long time. So it's a cool place.[00:39:56] Sam Schillace: I think there's a lot of interesting stuff. We[00:39:57] swyx: have some awareness Satya is a listener. So obviously he's super welcome on the pod anytime. You can just drop in a good word for us.[00:40:05] Sam Schillace: He's fun to talk to. It's interesting because like CEOs can be lots of different personalities, but he is, you were asking me about how I'm like, so hands on and engaged,[00:40:14] Sam Schillace: I'm amazed at how hands on and engaged he can be given the scale of his job. Like, he's super, super engaged with stuff, super in the details, understands a lot of the stuff that's going on.
And the science side of things, as well as the product and the business side, I mean, it's really remarkable. I don't say that, like, because he's listening or because I'm trying to pump the company, like, I'm, like, genuinely really, really impressed with, like, how, what he's, like, I look at him, I'm like, I love this stuff, and I spend all my time thinking about it, and I could not do what he's doing.[00:40:42] Sam Schillace: Like, it's just incredible how much he can get[00:40:43] Ben Dunphy: into his head.[00:40:44] Sam at AI Leadership Track[00:40:44] Ben Dunphy: Sam, it's been an absolute pleasure to hear from you here, hear the war stories. So thank you so much for coming on. Quick question though, you're here on the podcast as the presenting sponsor for the AI Engineer World's Fair, will you be taking the stage there, or are we going to defer that to Satya?[00:41:01] Ben Dunphy: And I'm happy[00:41:02] Sam Schillace: to talk to folks. I'm happy to be there. It's always fun to, like, I, I like talking to people more than talking at people. So I don't love giving keynotes. I love giving Q&As and like engaging with engineers and like, I really am at heart just a builder and an engineer, and like, that's what I'm happiest doing, like being creative and like building things and figuring stuff out.[00:41:22] Sam Schillace: That would be really fun to do, and I'll probably go just to like, hang out with people and hear what they're working on and thinking about.[00:41:28] swyx: The AI leadership track is just AI leaders, and then it's closed doors, so you know, it's more sort of an unconference style where people just talk[00:41:34] Sam Schillace: about their issues.[00:41:35] Sam Schillace: Yeah, that would be, that's much more fun. That's really, because we are really all wrestling with this, trying to figure out what it means. Right. So I don't think anyone, I, the reason the Schillace laws kind of give me the willies a little bit is like, I, I was joking that we should just call them the Schillace best guesses, because like, I don't want people to think that that's like some iron law.[00:41:52] Sam Schillace: We're all trying to figure this stuff out. Right. Like some of it's right. Some of it's not right. It's going to be messy. We'll have false starts, but yeah, we're all working it out. So that's the fun conversation. All[00:42:02] Ben Dunphy: right. Thanks for having me. Yeah, thanks so much for coming on.[00:42:05] Final Plug for Tickets & CFP[00:42:05] Ben Dunphy: For those of you listening, interested in attending AI Engineer World's Fair, you can purchase your tickets today.[00:42:11] Ben Dunphy: Learn more about the event at ai.engineer. You can even purchase group discounts. If you purchase four or more tickets, use the code GROUP, and one of those four tickets will be free. If you want to speak at the event, the CFP closes April 8th, so check out the link at ai.engineer and send us your proposals for talks, workshops, or discussion groups.[00:42:33] Ben Dunphy: So if you want to come to THE event of the year for AI engineers, the technical event of the year for AI engineers, this is June 25, 26, and 27 in San Francisco. That's it! Get full access to Latent Space at www.latent.space/subscribe
In episode 78 of EDB 5.0, we welcome Jan Simon, IT Director at Hafnia. Hafnia is the world's fourth-largest shipping company. Jan has been part of Hafnia's digitalization journey since 2018, helping to innovate and optimize the company's operations with intelligent IT solutions. In this episode, we take a deep dive into two of the solutions Jan and his team have developed: the internal chatbot Marvis, which improves employees' access to data, and 'Portcall Documents'. By automatically scanning and handling port documents, Hafnia has achieved an accuracy of 95%, drastically reducing the risk of errors and financial loss. We also take a look at Hafnia's upcoming rollout of Microsoft Copilot, discussing how it will be implemented, the use cases, and what opportunities the future holds for Hafnia with this new technology. Shownotes: 00:00 – 09:51: Intro to Jan Simon, Hafnia 09:51 – 36:41: Walkthrough of Hafnia use cases (Azure OpenAI chatbot & Portcall Documents) 36:41 – 41:39: Rollout of Microsoft Copilot at Hafnia Host: Mathias Mengesha Emiliussen
This is a re-post from November 2023. In this episode, Thomas Betts talks with Pamela Fox, a cloud advocate in Python at Microsoft. They discuss several ChatGPT sample apps that Pamela helps maintain. These include a very popular integration of ChatGPT with Azure OpenAI and Cognitive Search for querying enterprise data with a chat interface. Pamela also covers some best practices for getting started with ChatGPT apps. Read a transcript of this interview: https://www.infoq.com/podcasts/chatgpt-enterprise-data-search/ Subscribe to the Software Architects' Newsletter for your monthly guide to the essential news and experience from industry peers on emerging patterns and technologies: https://www.infoq.com/software-architects-newsletter Upcoming Events: QCon London (April 8-10, 2024) Discover new ideas and insights from senior practitioners driving change and innovation in software development. https://qconlondon.com/ InfoQ Dev Summit Boston (June 24-25, 2024) Actionable insights on today's critical dev priorities. https://devsummit.infoq.com/ QCon San Francisco (November 18-22, 2024) Get practical inspiration and best practices on emerging software trends directly from senior software developers at early adopter companies. https://qconsf.com/ The InfoQ Podcasts: Weekly inspiration to drive innovation and build great teams from senior software leaders. Listen to all our podcasts and read interview transcripts: - The InfoQ Podcast https://www.infoq.com/podcasts/ - Engineering Culture Podcast by InfoQ https://www.infoq.com/podcasts/#engineering_culture - Generally AI Podcast www.infoq.com/generally-ai-podcast/ Follow InfoQ: - Mastodon: https://techhub.social/@infoq - Twitter: twitter.com/InfoQ - LinkedIn: www.linkedin.com/company/infoq - Facebook: bit.ly/2jmlyG8 - Instagram: @infoqdotcom - Youtube: www.youtube.com/infoq Write for InfoQ: Learn and share the changes and innovations in professional software development. - Join a community of experts. - Increase your visibility. - Grow your career. https://www.infoq.com/write-for-infoq
Michael is an ASP.NET and C# programmer who has extensive knowledge in process improvement, AI and Large Language Models, and student information systems. He also is the founder of two websites — AIStoryBuilders.com and BlazorHelpWebsite.com — both fantastic resources that help empower developers. Michael resides in Los Angeles, California, with his son Zachary and wife, Valerie. Topics of Discussion: [3:14] Michael talks about his career path. [5:15] AIStoryBuilders.com. [6:21] The vision for his book and what sets it apart from others. [9:10] What is “RAG”? Retrieval augmented generation. [12:35] How did Michael come up with the AI Story Builders name? [14:09] Keeping AI on track despite the limitations. [17:44] Models behave better when trained on more data. [21:26] How do you make the decision on which named model to use? [34:05] Where Microsoft is a leader. Mentioned in this Episode: Clear Measure Way Architect Forum Software Engineer Forum Programming with Palermo — New Video Podcast! Email us at programming@palermo.net. Clear Measure, Inc. (Sponsor) .NET DevOps for Azure: A Developer's Guide to DevOps Architecture the Right Way, by Jeffrey Palermo — Available on Amazon! Jeffrey Palermo's Twitter — Follow to stay informed about future events! Azure OpenAI Using C# Michael Washington GitHub AI Story Builders Adefwebserver Blazor-Blogs Want to Learn More? Visit AzureDevOps.Show for show notes and additional episodes.
For sure, Microsoft is interested in providing European companies with AI models. But Xiaopeng has good arguments. In this expanded role, he leads the business growth and go-to-market across AI services (e.g. Azure OpenAI), databases (e.g. Azure Cosmos DB), containers (e.g. Azure Kubernetes Service) and serverless (e.g. Azure Functions). Thanks for listening. We welcome suggestions for topics, criticism and a few stars on Apple, Spotify and Co. We thank our partner **HANNOVER MESSE** https://www.hannovermesse.de/de/ Xiaopeng Li (李小鹏) [Contact](https://www.linkedin.com/in/xiaopeng-li/)
Microsoft announced their FY24 Q2 earnings yesterday. Total revenue was reported as $62B, better than the $61.1B analysts were expecting. The critically important Microsoft Cloud revenue, which includes Azure, O365 Commercial, LinkedIn, Dynamics 365 and other cloud products, came in at $33.7B, representing a 24% increase year-over-year. Microsoft's go-forward success and revenue growth will continue to be directly tied to Microsoft's ability to get more customers to adopt cloud products, adopt AI solutions (Copilot for Microsoft 365, Copilot for Sales, Copilot for Service, GitHub Copilot, and Azure OpenAI) and migrate to the costly all-in Microsoft 365 E5 suite. In this podcast, our Microsoft Practice Leader, Adam Mansfield, discusses how enterprise customers can take advantage of Microsoft's needs and focus areas to ensure the right deal is struck at the negotiation table. He also covers what enterprise customers should expect from Microsoft as they prepare for their renewal negotiations. Host: Adam Mansfield: https://bit.ly/3rPGp8r Microsoft Commercial Advisory Services: https://bit.ly/2V78ADX
In episode 176 of our SAP on Azure video podcast we are going back up the stack. Last week we covered some cool infrastructure related topics, today we want to focus on AI again. You might remember that we had Noopur, CJ and also Michael in previous episodes talking about Azure OpenAI and how this could be leveraged for example in a Teams bot. Today we have Michael back with us and he shows a nice integration of Copilot Studio, Azure OpenAI and the new SAP OData Connector for Power Automate. https://github.com/mimergel Find all the links mentioned here: https://www.saponazurepodcast.de/episode176 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #Microsoft #SAP #Azure #SAPonAzure #AOAI #PowerPlatform #Teams ## Summary created by AI Key Topics: *SAP on Azure Video Podcast*: Holger and Robert introduce Michael Beck as the guest speaker who will show some AI scenarios with SAP and Azure. *AI scenarios with SAP and Azure*: Michael demonstrates six solutions that use Azure Open AI, Power Platform and SAP OData connector to enable chatbots, self-services, data analysis and app development for SAP data. *SAP OData connector*: Michael explains that the SAP OData connector is a new preview feature that simplifies the access to SAP OData services from the Power Platform and shows how to use it with Copilot Studio and Power Apps. *System prompts for Azure Open AI*: Michael shares some tips and best practices for creating system prompts that instruct Azure Open AI how to handle user queries and SAP data, such as using examples, feedback and context. *GitHub repositories*: Michael provides the links to his GitHub repositories where he has uploaded the solutions, presentations and videos for the AI scenarios and encourages everyone to download and try them out.
In this episode, we discuss the convergence of AI technologies as ChatGPT joins forces with Microsoft's Azure OpenAI service, examining the implications for developers, businesses, and the broader AI landscape. Invest in AI Box: https://Republic.com/ai-box Get on the AI Box Waitlist: https://AIBox.ai/ AI Facebook Community Learn more about AI in Video Learn more about Open AI
In this episode, we explore the powerful synergy between Microsoft's Azure OpenAI service and ChatGPT, shedding light on how this collaboration enhances the capabilities and accessibility of advanced AI for developers. Invest in AI Box: https://Republic.com/ai-box Get on the AI Box Waitlist: https://AIBox.ai/ AI Facebook Community Learn more about AI in Video Learn more about Open AI
In this episode, we break down the breaking news of ChatGPT's integration into Microsoft's Azure OpenAI service, providing insights into the technical aspects and potential applications of this collaboration. Invest in AI Box: https://Republic.com/ai-box Get on the AI Box Waitlist: https://AIBox.ai/ AI Facebook Community Learn more about AI in Video Learn more about Open AI
In this episode, we delve into the exciting developments as ChatGPT becomes a key player in Microsoft's Azure OpenAI service, uncovering the collaborative potential and innovative possibilities it unlocks. Invest in AI Box: https://Republic.com/ai-box Get on the AI Box Waitlist: https://AIBox.ai/ AI Facebook Community Learn more about AI in Video Learn more about Open AI
In this episode, we dissect the strategic implications of ChatGPT's availability on Microsoft's Azure OpenAI service, delving into the advantages, potential applications, and the impact on the AI landscape. Invest in AI Box: https://Republic.com/ai-box Get on the AI Box Waitlist: https://AIBox.ai/ AI Facebook Community Learn more about AI in Video Learn more about Open AI
In this episode, we explore the recent integration of ChatGPT into Microsoft's Azure OpenAI service, discussing the implications and functionalities it brings to developers and businesses utilizing AI services. Invest in AI Box: https://Republic.com/ai-box Get on the AI Box Waitlist: https://AIBox.ai/ AI Facebook Community Learn more about AI in Video Learn more about Open AI
Build and deploy copilot style apps that leverage the power of both GPT-4 Turbo with Vision and Azure AI Vision and Search in Microsoft's Azure AI Studio. Enable direct lookups from image inputs over your organizational data to ground generative AI responses. This marks a significant improvement in the accuracy of natural language processing and image recognition tasks to enable new generative AI scenarios. Video inputs are also uniquely supported when you combine GPT-4 Turbo with Vision and Azure AI Vision. Seth Juarez, Principal Program Manager for Azure AI, shares how it's easy to build and orchestrate powerful copilot style apps. ► QUICK LINKS: 00:00 - GPT-4 Turbo with Vision + Azure AI Vision 00:42 - Baseline capabilities of GPT-4 Turbo with Vision 02:43 - Direct lookups of image and video data 04:53 - See the two combined: Demo 05:52 - How to build it 07:17 - See the code behind your app 08:07 - Wrap up ► Link References Start using Azure AI Studio today at https://ai.azure.com Watch a detailed overview at https://aka.ms/AzureAIStudioMechanics Check out our QuickStart guides at https://aka.ms/LearnAIStudio ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. • Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Build, test, deploy, and monitor your generative AI apps at scale from one place with Azure AI Studio. Access models in the Azure OpenAI service, models from Meta, NVIDIA and Microsoft Research, as well as hundreds of open-source models. Integrate your own data across multiple data sets to ground your model, which is made easier through direct integration with OneLake in Microsoft Fabric. It uses shortcuts to let you bring in virtualized data sets across your data estate without having to move them. Use Azure AI Studio for full lifecycle development, from a unified playground for prompt engineering, to pre-built Azure AI skills to build multi-modal applications using language, vision, and speech, as well as Search, which includes hybrid with semantic ranking for more precise information retrieval. Test your AI applications for quality and safety with built-in evaluation, and use a prompt flow tool for custom orchestration, as well as overarching controls with Responsible AI content filters for safety. Seth Juarez, Principal Program Manager for Azure AI, gives you an overview of Azure AI Studio. ► QUICK LINKS: 00:00 - Build your own copilots in Azure AI Studio 01:52 - Copilot app running as a chatbot 03:53 - Retrieval augmented generation grounded on your data 04:54 - Experiment with prompts: Multi-modality 06:47 - Advanced capabilities: Prompt flow 08:58 - Ensure quality and safety of responses 10:09 - Wrap up ► Link References Start using Azure AI Studio today at https://ai.azure.com Check out our QuickStart guides at https://aka.ms/LearnAIStudio ► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. • Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries • Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog • Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast ► Keep getting this insider knowledge, join us on social: • Follow us on Twitter: https://twitter.com/MSFTMechanics • Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ • Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ • Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we're joined by Jay Emery, director of technical sales & architecture at Microsoft Azure. In our conversation with Jay, we discuss the challenges faced by organizations when building LLM-based applications, and we explore some of the techniques they are using to overcome them. We dive into the concerns around security, data privacy, cost management, and performance as well as the ability and effectiveness of prompting to achieve the desired results versus fine-tuning, and when each approach should be applied. We cover methods such as prompt tuning and prompt chaining, prompt variance, fine-tuning, and RAG to enhance LLM output along with ways to speed up inference performance such as choosing the right model, parallelization, and provisioned throughput units (PTUs). In addition to that, Jay also shared several intriguing use cases describing how businesses use tools like Azure Machine Learning prompt flow and Azure ML AI Studio to tailor LLMs to their unique needs and processes. The complete show notes for this episode can be found at twimlai.com/go/657.
Earlier this year, Microsoft announced courseware and studying materials for Azure OpenAI. We look at this 1-day course and reflect on the content, who this course is best suited for, and what one should learn. Also, Jussi asks Tobi an unexpected question. (00:00) - Intro and catching up. (02:18) - Show content starts. Show links: - Announcing Azure AI Studio preview (microsoft.com) - Azure AI Studio: ai.azure.com - Copilots: See them all in the Book of News - AI transformation and the technology driving change - What's new in Azure AI Platforms – Charting the Future with Innovative AI and ML - Security Adoption Framework - Cloud PKI - Windows App - Microsoft SSE - Give us feedback!
In episode 71, we are joined by Sebastian Mira Lindegaard, Head of ML, AI & Chatbot at PensionDanmark. We talk with Sebastian about his journey and his interest in AI and machine learning. We also dive into the work Sebastian and his team have done implementing advanced solutions across PensionDanmark, including the tool "AI Assistenten" and a generative AI chatbot, each of which helps advisors handle cases, answer inquiries, and generally improve the advisory process at PensionDanmark. In addition, we dig into the technologies behind it all, particularly Azure OpenAI, how chatbots have evolved over the years and what they can do today, and the maintenance and improvement of the underlying models. Shownotes: 00:00 - 10:52: Introduction to Sebastian and PensionDanmark 10:52 – 40:49: Presentation of solutions (Next Best Action & Conversational AI), Azure OpenAI, model training, data labeling, and much more 40:49 – 47:42: Discussion of the importance of business-facing people in developing and training models, "AI trainers", and collaborating with users early in the process Host: Mathias Mengesha Emiliussen
Earlier this year, Microsoft announced courseware and studying materials for Azure OpenAI. We look at this 1-day course and reflect on the content, who this course is best suited for, and what one should learn. Also, Jussi asks Tobi an unexpected question. (00:00) - Intro and catching up. (02:02) - Community highlights. (04:42) - Show content starts. Community Highlights: - Uli Homann, CVP & Distinguished Architect at Microsoft: Deepening Well-Architected guidance for workloads hosted on Azure (microsoft.com) Show links: - Course details: AI-050 - Course details: AI-102 - Request access to AOAI: Limited access to Azure OpenAI Service - Azure AI services - Give us feedback!
In this episode, Thomas Betts talks with Pamela Fox, a cloud advocate in Python at Microsoft. They discuss several ChatGPT sample apps that Pamela helps maintain. These include a very popular integration of ChatGPT with Azure OpenAI and Cognitive Search for querying enterprise data with a chat interface. Pamela also covers some best practices for getting started with ChatGPT apps. Read a transcript of this interview: https://bit.ly/47wmE9r Subscribe to the Software Architects' Newsletter [monthly]: www.infoq.com/software-architect…mpaign=architectnl Upcoming Events: QCon London https://qconlondon.com/ April 8-10, 2024 Follow InfoQ: - Mastodon: https://techhub.social/@infoq - Twitter: twitter.com/InfoQ - LinkedIn: www.linkedin.com/company/infoq - Facebook: bit.ly/2jmlyG8 - Instagram: @infoqdotcom - Youtube: www.youtube.com/infoq Write for InfoQ - Join a community of experts. - Increase your visibility. - Grow your career. www.infoq.com/write-for-infoq/?u…aign=writeforinfoq
FULL SHOW NOTES: https://podcast.nz365guy.com/497 Ever wondered about the vast universe of Power Virtual Agents and AI? Brace yourselves for an enlightening conversation with our guest, Dewain Robinson, straight from Nashville, Tennessee, the Principal Program Manager for Power Virtual Agents and Conversational AI at Microsoft. We dive into the intricate workings of Power Virtual Agents and how Azure OpenAI service can revolutionize data accessibility by creating an easy-to-navigate knowledge base. Our discourse traverses the democratization of data science and AI, revealing how copilot is opening new doors for people without coding backgrounds, and how large language models can extract knowledge from data. We navigate through the multifaceted world of Azure OpenAI, its significance and the necessity of recognizing the loopholes when training a model. Dewain also shares insights on how Azure OpenAI in Microsoft Teams can make data access more efficient. As we advance, we tackle the challenges of using large language models and search engine optimization to help customers identify data issues. The importance of starting with public data before using internal data is emphasized, alongside the benefits of publishing content on a web page. We wind up with a sneak peek into the upcoming innovations with Azure Cognitive Services and their potential to create more powerful virtual agents and conversational AI. Prepare to be amazed by the technological advances that are just around the corner! OTHER RESOURCES: Power Platform/Power Users: https://powerusers.microsoft.com/t5/user/viewprofilepage/user-id/399912 GitHub: https://github.com/Dewain27 AgileXRM: AgileXRm - The integrated BPM for Microsoft Power Platform Register your interest for the 2024 90-Day Mentoring Challenge: ako.nz365guy.com Support the show. If you want to get in touch with me, you can message me here on LinkedIn. Thanks for listening
In episode 165 of our SAP on Azure video podcast we talk about SAP HANA on Azure Large Instances will be retired by 30 June 2025 and the transition to Virtual Machines, Configuring Azure NetApp Files (ANF) Application Volume Group (AVG) for zonal SAP HANA deployment, Changes to the Azure reservation exchange policy, a look back at the ABAP Story, A Cool use of Open AI in Eclipse, Microsoft Business Applications Launch Event introduces wave of new AI-powered capabilities for Dynamics 365 and Power Platform and Azure OpenAI powered SAP Self-Services. Then we take a closer look at SDAF, or the SAP Deployment Automation Framework, which started as nice tool to simplify the deployment of SAP systems in a consistent and repeatable way. In the meantime SDAF has grown into a huge project, is leveraged by the Azure Center for SAP Solution and actually by lots of partners and customers. Kimmo Forss and Hemanth Damecharla show us the latest features, like the Configuration Editor and the integration in Azure DevOps Find all the links mentioned here: https://www.saponazurepodcast.de/episode165 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #Microsoft #SAP #Azure #SAPonAzure #SDAF #Infrastructure ## Summary created by AI Key Topics: * News from the weekend: The team shared some news about Azure VMs, reserved instances, exchange policy, and SAP on Azure video podcast. * Introduction of Kimmo and Hemanth: Kimmo and Hemanth are part of the SAP on Azure development team based in Helsinki, Finland, working on the SAP Deployment Automation Framework. * Overview of SAP Deployment Automation Framework: The framework is an open source tool that helps customers deploy and configure SAP systems on Azure using Terraform and Ansible, with a modular and extensible design. * New features and demos of the framework: The team showed some new features and demos of the framework, such as the configuration editor, the application service, the private DNS, the application manifest, and the software download. * Questions and feedback: The team answered some questions and feedback from the audience, such as the support for different SAP versions and platforms, the open source contribution model, the validation and testing process, and the deployment time and complexity.
I recently had the pleasure of sitting down with John Kelleher, head of the UK and Ireland business at UiPath. Our conversation navigated through the intricate maze of artificial intelligence and automation, focusing on UiPath's groundbreaking advancements in this space. If AI is the brain, then automation serves as the body that propels it into action, a sentiment that resonated throughout our discussion. UiPath made headlines today as they announced their latest AI-powered automation features, aimed at accelerating the discovery, automation, and scaling of business processes. Among the notable offerings are Generative AI and Specialised AI capabilities, including OpenAI and Azure OpenAI connectors with support for the GPT-4 model. These tools are far from mere bells and whistles; they have practical applications such as drafting responses to customer inquiries and summarizing extensive documents into key points. However, the innovation doesn't stop there. Another feature, Clipboard AI for Finance, employs UiPath's Computer Vision and Generative AI to transfer data between documents, spreadsheets, and applications. This facilitates a more streamlined flow of information, helping businesses operate with increased efficiency and reduced error. Additionally, UiPath's Communications Mining and Document Understanding now harness GPT models to bolster their capabilities. Beyond these technical details lies a more profound value proposition: the undeniable impact of AI-powered automation on productivity and talent retention. John and I explored how the amalgamation of AI and automation does more than simply improve efficiency. It elevates the quality of work, thereby attracting top-tier talent and offering organizations a unique competitive edge across industries such as healthcare, manufacturing, and financial services. Of course, one cannot discuss the rise of AI and automation without addressing the elephant in the room: the societal fears surrounding job losses. However, both John and I agreed that these fears, though valid, are not unprecedented. From the invention of the printing press to the advent of the internet, every technological leap has induced concerns about job displacement. Yet history has consistently shown that technology more often creates new opportunities than it takes away. Our conversation also touched upon the relationship between specialized and generative AI. While specialized AI offers solutions to specific problems, generative AI provides a broader range of possibilities. This duality allows businesses to tackle unique challenges with a balanced blend of flexibility and precision. As we look towards the future, it becomes increasingly apparent that a new generation of workers will soon make AI and automation non-negotiable norms in the workplace. This younger workforce isn't merely open to the idea of AI; they expect it, thereby accelerating its adoption across multiple sectors. We wrapped up our engaging conversation by discussing the implications of AI on sectors like manufacturing, healthcare, and financial services, acknowledging that regulation must be nuanced and sector-specific. A one-size-fits-all regulatory approach will hardly suffice given the unique challenges and opportunities presented by AI in different industries. The pinnacle of our discussion was the real-world examples provided by John, which transformed our theoretical talk into a practical dialogue. 
UiPath's new capabilities are not just conceptual; they offer real tools that businesses can leverage to address targeted challenges through AI-powered automation. In summary, today's discussion underscored the fact that the union of AI and automation is catalyzing unprecedented opportunities and efficiencies. Yet, like any significant technological advance, it comes bundled with its challenges and fears. The latest offerings from UiPath symbolize not merely an advancement in technology but a stride towards a more automated, intelligent, and, indeed, promising future.
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
In this episode, we delve into the latest breakthrough in AI integration, where ChatGPT joins forces with Microsoft's Azure OpenAI service. Discover how this collaboration is empowering developers to infuse AI into their applications, opening up new horizons in automation and personalization. Join us as we explore the implications and possibilities of this innovative integration.
Get on the AI Box Waitlist: https://AIBox.ai/
Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/
Follow me on Twitter: https://twitter.com/jaeden_ai
Microsoft for Startups works with organizations to enable their success. This episode looks at the programs available for founders and startups, and the conversation covers how startups are leveraging AI and moving the state of the art forward.
Guests:
* Hans Yang is General Manager of Microsoft for Startups. He looks for new programs and platforms that enable Microsoft to help startups. Follow Hans on LinkedIn.
* Rob Ferguson is head of AI for Startups at Microsoft. As such, he gets to work with some amazing partners and technologies, many of whom are driving true innovation in the AI space. Follow Rob on LinkedIn.
Show links:
* Microsoft for Startups home
* Founders Hub
* Pegasus Program
* Technical advisory for founders from Microsoft experts
* Microsoft for Startups Blog
* Azure OpenAI Service
Hosts:
* Paul Maher is General Manager of the Commercial Marketplace Services Team at Microsoft. Follow him on LinkedIn and Twitter.
* David Starr is a Principal Software Development Engineer in the Commercial Marketplace Services Team at Microsoft. Follow him on LinkedIn.
Domino's Pizza announced last week that the company is partnering with Microsoft Cloud and Azure OpenAI Service to create a generative AI assistant that can help improve both employee and customer service. While specific details of the five-year project will be announced over the next six to 18 months, Kelly Garcia, Domino's executive vice president and chief technology officer, was able to provide some insight on how the AI project will help improve employees' jobs and personalize the customer experience in the near future. Garcia said that the AI technology will be a mix of proprietary and non-proprietary technology in partnership with Microsoft, in which the company's own personalized, in-house technology will be overlaid on top of the Microsoft Cloud and Azure OpenAI interface. The project will begin, he said, with the relaunch of the Domino's website, which will be used to kickstart the improvement of the customer personalization experience with the help of AI.
In this episode, we talk about the lessons we've learned in the past week. Three lessons to share, on Microsoft Fabric, Entra ID, and Azure OpenAI. Also, Tobi asks Jussi an unexpected question.
(00:00) - Intro and catching up
(02:54) - Community highlights
(04:56) - Show content starts
Community Highlights:
* Valentina Alto: Generating applications from sketches with LLMs
* Edi Wang: Deploy ChatGPT Next Web to Azure Container Apps with Individual Account Login in 3 minutes
Show links:
* Microsoft Fabric Licensing Guide from Reza Rad
* Entra ID External ID Public Preview
* Give us feedback!
In episode 162 of our SAP on Azure video podcast we talk about RISE into the Future with SAP, SAP announcing its new generative AI assistant Joule, and chatBASF. Then we go back to the AI SDK for SAP. A few weeks ago we already had Gopal joining us, where he introduced us to the AI SDK for ABAP. As promised, we wanted to take a deep dive on this topic. Gopal has brought some cool examples of how you can use Azure OpenAI as an ABAP developer. https://microsoft.github.io/aisdkforsapabap/
Find all the links mentioned here: https://www.saponazurepodcast.de/episode162
Reach out to us for any feedback / questions:
* Robert Boban: https://www.linkedin.com/in/rboban/
* Goran Condric: https://www.linkedin.com/in/gorancondric/
* Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/
#Microsoft #SAP #Azure #SAPonAzure #AzureAI #OpenAI #AI #ABAP
## Summary created by AI
Key Topics:
* Introduction: Holger and Gopal introduce themselves and the topic of the SDK for ABAP.
* News: Holger shares three news items about SAP events, products, and chatbot scenarios.
* SDK for ABAP demo: Gopal shows how to install, configure, and use the SDK for ABAP to create a simple program that interacts with OpenAI and generates text based on prompts.
* SDK for ABAP features: Gopal explains some of the enterprise-ready capabilities of the SDK for ABAP, such as data policies, profile settings, and granular control.
Combine Azure Communication Services, the same platform that runs Microsoft Teams, with the Azure OpenAI service for generative AI using GPT. Automate and transform your customer service interactions with faster, more informed, human-like responses, whether text-based through bots or through integrated voice channels. Provide a seamless escalation path for agents, with the context and precise information they need to respond to escalations rapidly and effectively. Bob Serr, Azure Communication Services VP, joins Jeremy Chapman to share how to build GPT-automated customer support with Azure Communication Services.
► QUICK LINKS:
00:00 - Combine Azure AI and Azure Communication Services
01:02 - What is Azure Communication Services?
02:20 - Developer advantages
03:22 - Demo: customer experience
06:32 - Demo: technician experience
08:16 - See how it works behind the scenes
10:00 - How to get it up and running
12:18 - Wrap up
► Link References:
Get core services up and running at https://aka.ms/ACSAIsampleapp
For more information, check out https://aka.ms/ACSdocs
► Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
• Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
• Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
• Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast
► Keep getting this insider knowledge, join us on social:
• Follow us on Twitter: https://twitter.com/MSFTMechanics
• Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
• Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
• Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
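To make the generation half of this scenario concrete, here is a minimal Python sketch, assuming an Azure OpenAI chat deployment, that drafts a suggested agent reply from a short chat transcript. It uses only the openai package; wiring the result into Azure Communication Services chat or calling flows is not shown, and the endpoint, deployment name, and API version are placeholders rather than values from the episode.

```python
import os

from openai import AzureOpenAI  # pip install openai>=1.0

# Placeholder resource details -- replace with your own Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version; check your resource
)

# A short transcript such as an Azure Communication Services chat thread might produce.
transcript = [
    ("customer", "My smart thermostat keeps dropping off Wi-Fi after the last firmware update."),
    ("bot", "I'm sorry to hear that. Have you tried restarting the thermostat?"),
    ("customer", "Yes, twice. It reconnects for an hour and then drops again."),
]

messages = [
    {
        "role": "system",
        "content": (
            "You draft suggested replies for a human support agent. "
            "Summarize the issue in one sentence, then propose the next troubleshooting step. "
            "Be concise and polite."
        ),
    },
    {
        "role": "user",
        "content": "Conversation so far:\n"
        + "\n".join(f"{speaker}: {text}" for speaker, text in transcript),
    },
]

response = client.chat.completions.create(
    model="gpt-4o-support",  # your chat *deployment* name, not the base model name
    messages=messages,
    temperature=0.3,
    max_tokens=200,
)

print(response.choices[0].message.content)
```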
In episode 161 of our SAP on Azure video podcast we talk about the DSAG Jahreskongress 2023, SAP Business One with Power Platform, and the announcement of Microsoft Copilot! Then we switch topics: five months ago we had a very interesting episode where CJ from our team in Asia had developed a simple but amazing scenario showing how to leverage Azure OpenAI and Teams to generate SQL queries that access data from SAP. After his session we had Michael Mergell and our intern Noopur Vaishnav joining as well to talk about similar scenarios. Today CJ joins us again to provide an update on the latest integrations of Teams, SAP, and AI. https://github.com/cjpark-sapcsa/chatgpt-sap-aoai
Find all the links mentioned here: https://www.saponazurepodcast.de/episode161
Reach out to us for any feedback / questions:
* Robert Boban: https://www.linkedin.com/in/rboban/
* Goran Condric: https://www.linkedin.com/in/gorancondric/
* Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/
#Microsoft #SAP #Azure #SAPonAzure #AzureOpenAI
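As a rough illustration of the natural-language-to-SQL idea CJ demonstrates (his actual implementation is in the linked repository), here is a hedged Python sketch: the prompt carries a table schema, the model returns a SQL statement, and your own code would then validate and run it against the SAP-fed database. The schema, table names, deployment name, and API version are invented for the example.

```python
import os

from openai import AzureOpenAI  # pip install openai>=1.0

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version
)

# Hypothetical schema for SAP sales data replicated into a SQL database.
schema = """
Table sales_orders(order_id INT, customer_id INT, order_date DATE, net_amount DECIMAL, currency VARCHAR)
Table customers(customer_id INT, name VARCHAR, country VARCHAR)
"""

question = "What was the total order value per country in 2023?"

system_prompt = (
    "Translate the user's question into a single ANSI SQL query. "
    "Use only these tables and columns:\n"
    f"{schema}\n"
    "Return only the SQL statement, with no explanation."
)

response = client.chat.completions.create(
    model="gpt-4o-sql",  # your deployment name (placeholder)
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ],
    temperature=0,
)

sql = response.choices[0].message.content
print(sql)
# A real integration would validate this query (read-only, allowed tables only)
# before executing it and posting the result back to the Teams conversation.
```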
AI Hustle: News on Open AI, ChatGPT, Midjourney, NVIDIA, Anthropic, Open Source LLMs
In this episode, we unveil the exciting news of ChatGPT's integration into Microsoft's Azure OpenAI service. Explore the implications of this collaboration, from enhanced accessibility to innovative applications across industries. Join us for an insightful discussion on how ChatGPT's presence within the Azure ecosystem is set to revolutionize the AI landscape and drive transformative change.
Get on the AI Box Waitlist: https://AIBox.ai/
Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/
Follow me on Twitter: https://twitter.com/jaeden_ai
Stop by this episode to see and hear what Angelica Faber, Security Architect at Microsoft, has been working on. Angelica has produced some great content and guidance on using Azure OpenAI with Microsoft Sentinel to provide better efficiency and deeper knowledge for Security Operations teams.
Show Notes/Links:
* Angelica's blog: https://myfabersecurity.com/
* Angelica on LinkedIn: https://www.linkedin.com/in/angelica-faber/
* Rubrik: https://www.rubrik.com/
* Microsoft Envision The Tour: https://envision.microsoft.com/
* Microsoft Sentinel Triage AssistanT (STAT): https://github.com/briandelmsft/SentinelAutomationModules
This is a demo-heavy episode. Catch the full experience with the live show video replay…
In this episode, we talk about Microsoft Entra Global Secure Access. What is it? Why would you need it? We dissect the services and talk about the capabilities and how they map back to services we've had before. Also, Jussi asks Tobi an unexpected question.
(00:00) - Intro and catching up
(05:01) - Community highlights
(06:10) - Show content starts
Community Highlights:
* Ronak Chokshi: Introducing a new Azure AI Language video series showcasing features powered by Azure OpenAI & more
* Saul Dolgin: Introducing the Azure Business Continuity Guide
Show links:
* What is Microsoft Entra Global Secure Access?
* Give us feedback!
Sponsor: This episode is sponsored by Sovelto. Stay ahead of the game and advance your career with continuous learning opportunities for Azure Cloud professionals. Sovelto Eduhouse – Learning as a Lifestyle - Start Your Journey now: https://www.eduhouse.fi/cloudpro
In episode 156 of our SAP on Azure video podcast we talk about SAP on Azure NetApp Files Sizing Best Practices, SAP HANA Azure virtual machine storage configurations, NFS v4.1 volumes on Azure NetApp Files for SAP HANA, Announcing public preview of new Mv3 Medium Memory Virtual Machines, the SAP CDC Connector on Azure and Part 2 of the SAP S/4HANA Cloud ABAP Environment integration journey with Microsoft blog post, this time about Azure OpenAI & AI SDK for ABAP. Then we take a closer look at the Microsoft AI SDK for ABAP. A few months ago, we talked about a revolutionary new SDK in our news of the week section: the AI SDK for ABAP. This SDK empowers ABAP developers to consume Azure OpenAI services without leaving their ABAP code or having to know the intricate details of Azure OpenAI. I had the opportunity to present and demo the SDK with the German Speaking SAP User Group a few months back, which led to some exciting discussions with customers and actual follow-up projects. Today, we are thrilled to have the brain behind the AI SDK for ABAP with us: Gopal Nair, a Principal Software Engineer at Microsoft. We're really looking forward to Gopal introducing the topic and sharing some examples with us. In fact, he has so many examples that we're planning to do a few follow-up sessions with him, so stay tuned! (this intro was written by Azure OpenAI) Find all the links mentioned here: https://www.saponazurepodcast.de/episode156 Reach out to us for any feedback / questions: * Robert Boban: https://www.linkedin.com/in/rboban/ * Goran Condric: https://www.linkedin.com/in/gorancondric/ * Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/ #Microsoft #SAP #Azure #SAPonAzure #AzureOpenAI #OpenAI #ABAP #abapGit
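To give a feel for the "intricate details of Azure OpenAI" that the SDK hides from ABAP developers, here is a sketch of the raw chat completions REST call the SDK effectively wraps, written in Python with requests purely for illustration. The endpoint, deployment name, and API version are placeholders and are not taken from the episode.

```python
import os

import requests

endpoint = "https://<your-resource>.openai.azure.com"  # placeholder resource endpoint
deployment = "gpt-4o-abap-demo"                         # placeholder deployment name
api_version = "2024-02-01"                              # assumed API version

url = f"{endpoint}/openai/deployments/{deployment}/chat/completions?api-version={api_version}"
headers = {
    "api-key": os.environ["AZURE_OPENAI_API_KEY"],
    "Content-Type": "application/json",
}
body = {
    "messages": [
        {"role": "system", "content": "You explain ABAP code for developers."},
        {"role": "user", "content": "Summarize what a SELECT ... FOR ALL ENTRIES statement does."},
    ],
    "max_tokens": 300,
    "temperature": 0.2,
}

# One HTTP POST per completion; the SDK wraps this call (and authentication) in ABAP classes.
resp = requests.post(url, headers=headers, json=body, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```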
MLOps Coffee Sessions #171 with Thibaut Labarre, Using Large Language Models at AngelList, co-hosted by Ryan Russon. We are now accepting talk proposals for our next LLM in Production virtual conference on October 3rd. Apply to speak here: https://go.mlops.community/NSAX1O
// Abstract
Thibaut addressed the constraints of AngelList's previous system, achieving scalability and cost efficiency. Drawing on AngelList's investing and natural language processing expertise, his team refined news article classification for investor dashboards. Central to this work is AngelList Relay, a platform that automates document parsing and surfaces vital insights for investors. Thibaut reflects candidly on challenges such as collaborating with Azure OpenAI and working around rate limits. The conversation highlights the strategic importance of prompt engineering and of empowering domain experts to keep improving the system.
// Bio
Thibaut LaBarre is an engineering lead with a background in Natural Language Processing (NLP). Currently, Thibaut focuses on unlocking the potential of Large Language Model (LLM) technology at AngelList, enabling everyone within the organization to become prompt engineers on a quest to streamline and automate the infrastructure for venture capital. Prior to that, Thibaut began his journey at Amazon as an intern, where he built Heartbeat, a state-of-the-art NLP tool that consolidates millions of data points from various feedback sources, such as product reviews, customer contacts, and social media, to provide valuable insights to global product teams. Over the span of seven years, he expanded his internship project into an organization of 20 engineers. He received an M.S. in Computational Linguistics from the University of Washington.
// MLOps Jobs board
https://mlops.pallet.xyz/jobs
// MLOps Swag/Merch
https://mlops-community.myshopify.com/
// Related Links
Website: https://www.angellist.com/venture/relay
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Ryan on LinkedIn: https://www.linkedin.com/in/ryanrusson/
Connect with Thibaut on LinkedIn: https://www.linkedin.com/in/thibautlabarre/
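The episode mentions rate limits only in passing, so the following is a generic, hedged sketch rather than AngelList's actual solution: retrying an Azure OpenAI classification call with exponential backoff and jitter when the deployment returns HTTP 429. The endpoint, deployment name, labels, and API version are placeholders.

```python
import os
import random
import time

from openai import AzureOpenAI, RateLimitError  # pip install openai>=1.0

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version
)


def classify_article(text: str, max_retries: int = 5) -> str:
    """Classify a news article, backing off when the deployment returns 429."""
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4o-classifier",  # placeholder deployment name
                messages=[
                    {"role": "system",
                     "content": "Label the article as one of: funding, acquisition, product, other."},
                    {"role": "user", "content": text},
                ],
                temperature=0,
            )
            return response.choices[0].message.content.strip()
        except RateLimitError:
            # Exponential backoff with jitter: roughly 1s, 2s, 4s, ... plus noise.
            time.sleep((2 ** attempt) + random.random())
    raise RuntimeError("Rate limited on every attempt; consider batching or provisioned throughput.")


print(classify_article("Acme Robotics raises a $30M Series B led by Example Ventures."))
```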
Chris Seferlis joins me in the virtual studio to discuss the 2023 tech conference season... Be sure to grab Chris's book: Practical Guide to Azure Cognitive Services: Leverage the power of Azure OpenAI to optimize operations, reduce costs, and deliver cutting-edge AI solutions.
You can get the book on Amazon here: https://aka.s4nets.cloud/CSBOOK2023
This insightful release goes into areas like barriers to entry, cost control, AI for manufacturing and logistics, and much more. Comment on Twitter or LinkedIn and tell us which area you want to see a deep dive on!
Stalk Chris online at the following outlets:
* Follow Chris's YouTube channel HERE!
* Chris is on Twitter HERE!
Follow Michael Askins:
* YouTube
* Twitter
* LinkedIn
We want to thank our anchor sponsor, solutions4networks, which makes this show possible! Visit s4nets to gain insight and information on services like Network Security, Cloud Security, Unified Communications and Collaboration, Datacenter Technologies, and Microsoft Cloud Technologies.
Produced by Michael Askins and MA-ITPRO
In this episode of the Infrastructure Matters Podcast, Camberley Bates and Krista Macomber discuss the news items below. They then highlight the need for a comprehensive approach to cyber resiliency that involves collaboration between various teams and continuous training to stay ahead of evolving cyber threats, and they touch on implications for technology and IT infrastructure. Topics include:
* Veeam receives DoDIN APL certification
* IBM launches the FlashSystem 5045
* Rubrik partners with Microsoft for Sentinel SIEM and Azure OpenAI integration
* The Futurum Group's upcoming participation in the Flash Memory Summit
* Comprehensive cyber-resiliency: what exactly IS comprehensive cyber-resiliency?
* General considerations and best practices from a technology standpoint, touching on data protection, data management, and automation
* The CISO perspective, based on a recent Futurum Group research study
* End-user training
In episode 153 of our SAP on Azure video podcast we talk about Azure Migrate Windows Server upgrade, protecting Azure workloads with VM-level consistency using Agentless Crash-Consistent Restore Points, Azure API Management and how to work with OData services, and Microsoft Sentinel's Impact - Investigating a SAP Breach. Then we go deep with Harutyun Ter-Minasyan from SAP and Martin Pankraz on the latest news about SAP Private Link services. Azure OpenAI is now also available, and the connection via SAP Private Link offers some unique integration options. Harut and Martin show a nice scenario evaluating BTP audit log files using Azure OpenAI.
Find all the links mentioned here: https://www.saponazurepodcast.de/episode153
Reach out to us for any feedback / questions:
* Robert Boban: https://www.linkedin.com/in/rboban/
* Goran Condric: https://www.linkedin.com/in/gorancondric/
* Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/
#Microsoft #SAP #Azure #SAPonAzure #BTP #CAP #AzureOpenAI
In episode 150 of our SAP on Azure video podcast we talk about Premium SSD v2, Azure NetApp Files double encryption at rest, reference architectures with the SAP AI Core service on SAP BTP with Azure OpenAI, reducing your CO2 footprint using a smart generative AI application on SAP BTP, the reCAP Hackathon - CAP & Azure Cosmos DB, Part 1 of the SAP S/4HANA Cloud ABAP Environment integration journey with Microsoft, and new playbooks for Microsoft Sentinel for SAP. Then we take a deep dive into protection against brute-force and DDoS attacks with Evren Buyruk & Amir Dahan. They talk about security management with Azure services and leveraging Azure DDoS protection services to help with your SAP and non-SAP workloads on Azure.
https://www.saponazurepodcast.de/episode150
Reach out to us for any feedback / questions:
* Robert Boban: https://www.linkedin.com/in/rboban/
* Goran Condric: https://www.linkedin.com/in/gorancondric/
* Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/
#Microsoft #SAP #Azure #SAPonAzure #Security #Sentinel #DDoS
AI, MFA, and Viva Pulse. More ways to get started with Azure OpenAI using your own data, plus another long-awaited Azure DevOps feature that has been an open request since 2019. We also cover new ways to make your MFA setup even more secure.
---------
Hosts: Barbara Forbes & Jos van Schouten
Production / edit: Ron van der Zijden
Powered by OGD ict-diensten: ogd.nl
Curious about working at OGD? werkenbij.ogd.nl
---------
Public Preview: Azure OpenAI Service On Your Data: https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/introducing-azure-openai-service-on-your-data-in-public-preview/ba-p/3847000
Viva Pulse: https://techcommunity.microsoft.com/t5/microsoft-viva-blog/microsoft-viva-pulse-available-for-public-preview/ba-p/3838338
Azure DevOps organizational icon: https://devblogs.microsoft.com/devops/choose-an-image-for-your-organization/
System-preferred MFA: https://app.cloudscout.one/evergreen-item/mc565271/
In this episode we talk about prompt engineering in the context of ChatGPT and Azure OpenAI. What makes a good prompt? What aspects can you consider and use when crafting your prompts? And why should you care? Also, Jussi asks Tobi an unexpected question.
(00:00) - Intro and catching up
(02:42) - Community highlights
(04:00) - Show content starts
Community Highlights:
* Morten Knudsen: Orphaned Azure Security Principals Clean-up & Azure Policy Managed Identity Role Assignment Automation
* Zachary Cavanell: What runs ChatGPT? Inside Microsoft's AI supercomputer | Featuring Mark Russinovich
Show links:
* Prompt engineering techniques with Azure OpenAI
* Give us feedback!
Sponsor: This episode is sponsored by Sovelto. Stay ahead of the game and advance your career with continuous learning opportunities for Azure Cloud professionals. Sovelto Eduhouse – Learning as a Lifestyle - Start Your Journey now: https://www.eduhouse.fi/cloudpro
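To make the topic concrete, here is a small Python sketch of two techniques commonly discussed under prompt engineering with Azure OpenAI: a system message that pins down role, output format, and refusal behavior, plus a few-shot example that shows the model the expected answer shape. The deployment name, API version, and the prompts themselves are assumptions for illustration, not guidance taken from the episode.

```python
import os

from openai import AzureOpenAI  # pip install openai>=1.0

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version
)

messages = [
    # System message: role, tone, output format, and what to refuse.
    {"role": "system",
     "content": "You are an Azure support assistant. Answer in at most three bullet points. "
                "If the question is not about Azure, say you cannot help."},
    # Few-shot example: one worked question/answer pair teaches the expected shape.
    {"role": "user", "content": "How do I stop a VM from the CLI?"},
    {"role": "assistant", "content": "- Use `az vm stop --name <vm> --resource-group <rg>`.\n"
                                     "- Add `az vm deallocate` to stop paying for compute.\n"
                                     "- Confirm the state with `az vm get-instance-view`."},
    # The actual question.
    {"role": "user", "content": "How do I list all resource groups in a subscription?"},
]

response = client.chat.completions.create(
    model="gpt-4o-chat",   # placeholder deployment name
    messages=messages,
    temperature=0.2,       # low temperature keeps answers close to the examples
    max_tokens=200,
)
print(response.choices[0].message.content)
```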
Join us for this new episode of the Devoteam M Cloud podcast, dedicated to demystifying artificial intelligence (AI) and exploring its business use cases. You will hear from Laurent Letourmy, Head of Data France at Devoteam, and our experts Maha Gouiiaa and Dimitri Cabaud, Data Squad Leaders at Devoteam M Cloud. They cover the AI technology landscape, current market trends, and how these innovations are transforming the business world. Discover the characteristics and concrete applications of AI, with particular attention to Microsoft Build and OpenAI. We discuss the steps to follow for adopting AI within the enterprise, highlighting the importance of ownership, training, and service governance for optimal use of AI. Our experts share examples of projects delivered by Devoteam, ranging from software quality management and API monitoring to automated log management and code conversion. They also highlight the impact of AI on how quickly businesses evolve and the challenges of managing it technically. To conclude, Christophe Mottier, our host and Modern Work & Security Team Lead at Devoteam M Cloud, points to episode 5 of the La Pause M Cloud podcast for a deeper look at security in Azure OpenAI.
In this episode, we catch up with friend of the show Rin Ure about his new role at Microsoft and how he sees AI changing the way SOCs operate. Rin runs the Cyber Defense Operations Center One Cloud SOC Triage and Analysis team in the US. They are the team that handles triage and analysis of SOC requests for Microsoft, its services, and its Cloud and AI customers.
Show Links:
* Weekly OpenAI Newsletter: https://rodtrent.com/jtl
* Azure OpenAI community on LinkedIn: https://rodtrent.com/65g
* Microsoft Cyber Defense Operations Center (CDOC): https://rodtrent.com/594
* Microsoft Security Copilot: https://rodtrent.com/6pt
* Microsoft Corporate, External, and Legal Affairs (CELA): https://rodtrent.com/hdy
* Pluralsight AI learning: https://rodtrent.com/3i5
* SANS (SEC595: Applied Data Science and AI/Machine Learning for Cybersecurity Professionals): https://rodtrent.com/1i3
* Microsoft Security Insights Discord Server: https://discord.gg/2ktJHTrSAt
In episode 147 of our SAP on Azure video podcast we talk about BTP ABAP Environment / Steampunk on Azure, additional tutorials about leveraging Steampunk on Azure with other Microsoft services, Integration 2023, the new BTP Private Link service for Azure OpenAI, SAP AI Built for Business, and Microsoft Copilot. Then Abbas and Momin join us to talk about HANA scale-out systems and different configuration scenarios. They walk us through the configuration and show us the results live in Azure. In case you want to taste a delicious chicken, check out this recipe from Momin: Easy tandoori chicken - purchase tandoori mix from your local Indian grocery store, mix it with chicken thighs and legs, and have some good background music to inspire you.
In this new episode of our podcast La Pause M Cloud, Christophe Mottier, Modern Work & Security Team Lead, guides you through a detailed exploration of the security issues associated with Azure OpenAI. We welcome as guests Dimitri Cabaud, Data Squad Leader, and Aurelien Ledoux, Endpoint & Security Team Lead, two experts ready to share their valuable knowledge and experience. We begin with an introduction to Azure OpenAI and its main features, explaining how this service differs from other artificial intelligence solutions available on the market. We then dive into the heart of the security concerns around Azure OpenAI, discussing the best practices to adopt to ensure secure use of this technology. In our "Deep Dive" section, we take a closer look at the different types of data processed by Azure OpenAI and discuss strategies for preventing abuse, filtering content, and monitoring activity. We also cover the prerequisites for customers who want to get started with Azure OpenAI, including where to begin and what the associated costs are. To wrap up this episode, we summarize the key points of our discussion and reiterate the importance of security in the context of using Azure OpenAI. Don't miss our next episode, where we will explore concrete business cases around Azure OpenAI.
Chris Seferlis joins me in the virtual studio to discuss the new book he co-authored: Practical Guide to Azure Cognitive Services: Leverage the power of Azure OpenAI to optimize operations, reduce costs, and deliver cutting-edge AI solutions.
You can get the book on Amazon here: https://aka.s4nets.cloud/CSBOOK2023
This insightful release goes into areas like barriers to entry, cost control, AI for manufacturing and logistics, and much more. Comment on Twitter or LinkedIn and tell us which area you want to see a deep dive on!
Stalk Chris online at the following outlets:
* Follow Chris's YouTube channel HERE!
* Chris is on Twitter HERE!
Follow Michael Askins:
* YouTube
* Twitter
* LinkedIn
We want to thank our anchor sponsor, solutions4networks, which makes this show possible! Visit s4nets to gain insight and information on services like Network Security, Cloud Security, Unified Communications and Collaboration, Datacenter Technologies, and Microsoft Cloud Technologies.
Produced by Michael Askins and MA-ITPRO
Artificial intelligence is a revolution, but does it also apply to civil society organizations? How can they use ChatGPT and benefit from its capabilities? Skills in handling new tools can come in handy when building and changing the world, but we first need to understand, test, and implement these solutions. Join us for an introduction to AI for civil society organizations.
Our guests:
Pamela Krzypkowska - Data and AI Architect at Microsoft, working with Enterprise customers on flagship projects in these areas in Poland. She focuses mainly on operationalizing machine learning processes (MLOps), building machine learning models on the Databricks and Azure ML platforms, and working with generative models in Azure OpenAI. She is a member of the AI Ethics and Cybersecurity groups operating at the Chancellery of the Prime Minister (KPRM) and a lecturer at Kozminski University and the Warsaw University of Technology. Her main areas of interest are neural networks, AI ethics, and philosophy of mind. She studied Computer Science at the Warsaw University of Technology and Philosophy at the University of Warsaw.
Jacek Królikowski - President of the Information Society Development Foundation (FRSI), specializing in training methodology and project management. Since 2008 he has led the training activities delivered by FRSI as part of the Library Development Program. He has experience working in public institutions (the Central In-Service Teacher Training Centre) and non-governmental organizations (the Foundation in Support of Local Democracy). As a Council of Europe expert he took part in programs run by that organization between 1995 and 2005, mainly in the Balkans. From 2005 he worked as an independent expert, providing analysis and social research, project development, training, and project management services for companies, NGOs, and public institutions. Between 2006 and 2008 he managed large training projects delivered by UNDP and the Delegation of the European Commission for the civil service.
What you will learn in this episode:
01:35 What AI is and what it is not
04:04 How can you use AI in your work?
08:19 When should we be especially cautious when working with AI?
14:34 Will we have our own AI solutions in the future?
A recording of the full conversation from the Sektor 3.0 Live session is available on our YouTube channel.
Recommended tools:
https://openai.com/blog/chatgpt
https://azure.microsoft.com/
You can find the episode transcript HERE.
OpenAI, Azure AI, Azure OpenAI… oh my! While Microsoft is a huge investor in the OpenAI company, AI isn't a new venture for Microsoft. The Microsoft cloud has plenty of AI features that were accessible to customers prior to the OpenAI investment and partnership. This week's guest, Nick Brady, Senior Program Manager of Azure AI, is an expert in Microsoft AI and is going to break down the nuances between Azure OpenAI and OpenAI.
Episode topics:
* What is Azure OpenAI?
* How is Azure OpenAI different from OpenAI?
* How do you see Azure OpenAI being beneficial for Microsoft customers?
* Can you share a customer success story with Azure OpenAI?
Useful Resources:
* What is Azure OpenAI Service? - Azure Cognitive Services | Microsoft Learn
* About (openai.com)
About Nick Brady:
Nick Brady is a technology professional in artificial intelligence (AI). He received his Bachelor's and Master's of Science from Purdue University's Knoy School of Technology, where he studied Management and Computer Science. He currently serves as a Senior Program Manager within Azure AI Product Engineering, focusing on the Azure OpenAI Service. He has spent his career in the tech industry on emerging technologies and has focused on AI since joining Microsoft in 2018. Nick regularly meets with Microsoft's most valuable and strategic customers and partners worldwide to help them understand and realize the full potential of AI, sharing his insights and experience in the latest trends and developments in the AI space. His expertise lies in helping organizations leverage AI to drive business value and innovation, as well as understanding its impact on business and society. He is also a recipient of the 2020 Distinguished Speaker Award from the Microsoft Redmond Executive Briefing Center. Connect with Nick here - Nick Brady | LinkedIn
We'd love to hear from you:
Don't hesitate to reach out with any questions, comments, suggestions or feedback! We'd love to hear from you. Send your hosts an email at digestibledynamics@microsoft.com
Discover and follow other Microsoft podcasts at microsoft.com/podcasts
Hosted on Acast. See acast.com/privacy for more information.
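One practical way to see the difference Nick describes is in how the same openai Python package is pointed at the two services: OpenAI's hosted API takes an API key and base model names, while Azure OpenAI takes your own resource endpoint, an API version, and the deployment names you create in your subscription. A minimal sketch with placeholder names throughout:

```python
import os

from openai import AzureOpenAI, OpenAI  # pip install openai>=1.0

# OpenAI's hosted API: one global endpoint, base model names like "gpt-4o".
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
r1 = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "In one sentence, what does Azure OpenAI add?"}],
)

# Azure OpenAI: your own resource endpoint and a deployment you created,
# so networking, quotas, and compliance boundaries sit in your subscription.
azure_client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version
)
r2 = azure_client.chat.completions.create(
    model="my-gpt4o-deployment",  # deployment name, not the base model name
    messages=[{"role": "user", "content": "In one sentence, what does Azure OpenAI add?"}],
)

print(r1.choices[0].message.content)
print(r2.choices[0].message.content)
```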
We take a frequent look at the recent Azure updates. This week we found updates on Defender for Cloud, Windows Server Azure Edition, Azure OpenAI, and Logic Apps, among others. Also, Tobi asks Jussi an unexpected question.
(00:00) - Intro and catching up
(02:38) - Show content starts
Show links:
* Hotpatch for Windows Server VMs
* Azure DevOps 2023 Q1
* GPT-4 in Azure OpenAI
* Azure Logic Apps Data Mapper for VS Code
* 30 days of Azure AI
Sponsor: This episode is sponsored by Sovelto. Stay ahead of the game and advance your career with continuous learning opportunities for Azure Cloud professionals. Sovelto Eduhouse – Learning as a Lifestyle - Start Your Journey now: https://www.eduhouse.fi/cloudpro
Bring OpenAI's ChatGPT model in Azure to your own enterprise-grade app experiences with precise control over the knowledge base, for in-context and relevant responses. Interact with your organization's private internal data, while respecting the information protection controls put in place. Azure OpenAI Service is combined with Azure Cognitive Search to index and retrieve data that is private and external to the ChatGPT large language model. The retrieval step in Azure Cognitive Search finds the most relevant pieces of information and presents the top-ranked results to the language model. And because the knowledge lives outside of the ChatGPT model, you're in control; it's not used to train the model. Microsoft Distinguished Engineer Pablo Castro joins Jeremy Chapman to show how it works.
► QUICK LINKS:
00:00 - Introduction
01:29 - Apply ChatGPT to enterprise apps using Azure
03:40 - Demo: Typical app experience
05:45 - How ChatGPT generates a response
07:55 - Experiment with prompts
09:38 - How information protection works
11:03 - Process for adding new information
12:01 - Code behind the sample app
15:00 - Wrap up
► Link References:
Watch our OpenAI fundamentals show at https://aka.ms/OpenAIMechanics
Try out the sample app on GitHub at https://aka.ms/entGPTsearch
More on Azure OpenAI Service at https://aka.ms/azure-openai
Check out Azure Cognitive Search at https://aka.ms/azsearch
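The retrieval-then-generate pattern described above can be sketched in a few lines of Python: query Azure Cognitive Search (now Azure AI Search) for the top-ranked chunks, paste them into the prompt as sources, and ask the model to answer only from those sources. The index name, field names, deployment name, and API version here are assumptions specific to this sketch; the sample app at https://aka.ms/entGPTsearch is the full reference implementation.

```python
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient  # pip install azure-search-documents
from openai import AzureOpenAI                    # pip install openai>=1.0

question = "What does our travel policy say about booking business class?"

# 1. Retrieve: top-ranked chunks from a private index (index and field names are assumptions).
search = SearchClient(
    endpoint="https://<your-search>.search.windows.net",
    index_name="enterprise-docs",
    credential=AzureKeyCredential(os.environ["SEARCH_API_KEY"]),
)
hits = search.search(search_text=question, top=3)
sources = "\n".join(f"[{h['title']}] {h['content']}" for h in hits)  # assumed field names

# 2. Generate: the model only sees the retrieved text, which is not used to train it.
aoai = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version
)
answer = aoai.chat.completions.create(
    model="gpt-4o-chat",  # placeholder deployment name
    messages=[
        {"role": "system",
         "content": "Answer using only the sources below and cite the source title in brackets. "
                    "If the sources do not contain the answer, say so.\n\nSources:\n" + sources},
        {"role": "user", "content": question},
    ],
    temperature=0,
)
print(answer.choices[0].message.content)
```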
In this week's episode we're diving head first into Azure OpenAI. Let's talk about the service, how it relates to OpenAI and ChatGPT, what models to use, how and why. What does it cost? And will this change everything? Also, Jussi asks Tobi an unexpected question.
(00:00) - Intro and catching up
(02:50) - Show content starts
Show links:
* How to get access to Azure OpenAI?
* A practical look at Azure OpenAI (Jussi)
Sponsor: This episode is sponsored by Sovelto. Stay ahead of the game and advance your career with continuous learning opportunities for Azure Cloud professionals. Sovelto Eduhouse – Learning as a Lifestyle - Start Your Journey now: https://www.eduhouse.fi/cloudpro
In this episode we discuss ChatGPT in Azure OpenAI, support for Microsoft Purview Information Protection in Acrobat, Azure vaulted backup, and a very handy addition to meeting recordings in Teams.
---------
Hosts: Barbara Forbes & Jos van Schouten
Production / edit: Ron van der Zijden
Powered by OGD ict-diensten: ogd.nl
Curious about working at OGD? werkenbij.ogd.nl
---------
ChatGPT in Azure GA: https://azure.microsoft.com/en-us/blog/chatgpt-is-now-available-in-azure-openai-service/
Purview in Adobe Acrobat: https://www.microsoft.com/en-us/security/blog/2023/03/07/get-integrated-microsoft-purview-information-protection-in-adobe-acrobat-now-available/
Azure vaulted backup for storage account blobs + files: https://azure.microsoft.com/en-us/updates/azureblobvaultedbackups/
Microsoft Teams - Explicit Recording Consent for Teams Meetings: https://www.microsoft.com/nl-nl/microsoft-365/roadmap?rtc=1&searchterms=107781&filters=&searchterms=107781 and https://m365admin.handsontek.net/microsoft-teams-explicit-recording-consent-for-teams-meetings-2/
Today we again cover a bunch of cool new things that are generally available or in preview, including Azure OpenAI and cross-tenant sync in Azure AD! But the best news of all comes at the end!
Hosts: Barbara Forbes & Jos van Schouten
Production / edit: Nils Bloem
Powered by OGD ict-diensten: https://www.ogd.nl/
Curious about working at OGD? https://werkenbij.ogd.nl/
Azure OpenAI services are GA: https://azure.microsoft.com/en-us/blog/general-availability-of-azure-openai-service-expands-access-to-large-advanced-ai-models-with-added-enterprise-benefits/
Cross-tenant synchronization: https://www.microsoft.com/en-US/microsoft-365/roadmap?filters=&searchterms=109568 and https://learn.microsoft.com/en-us/azure/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview
Classic VM retirement postponed from 1 March to 1 September: https://azure.microsoft.com/en-us/updates/classicvmretirment/
Today I talk about the Azure OpenAI service, which was recently announced as generally available. I was lucky enough to get to try it, and today I share what I found.
- Azure OpenAI: the future of artificial intelligence: https://www.youtube.com/watch?v=1Qu0UCh8CIk
- Form to request access to Azure OpenAI: https://aka.ms/oai/access
On this episode of The Cloud Pod, the team sits down to talk about AWS's new patching policies, the general availability of Azure OpenAI, and the role of addressing identity and access management (IAM) challenges in ensuring a seamless transition to the cloud. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights
You are listening to a free excerpt from Ninja PRO, the daily selection of news for digital business professionals. With Ninja PRO you get daily marketing insights, social media updates, tech news, business events, and a selection of in-depth articles from the experts of the Ninja editorial team. Go to www.ninja.it/ninjapro to subscribe to the service.
Microsoft is launching the Azure OpenAI Service. The tool lets companies integrate tools such as DALL-E into their own cloud applications. It will soon include access to ChatGPT, the conversational AI that recently went viral. This is the first step in extending AI across all Microsoft products, as already announced by CEO Satya Nadella.
YouTube is testing free streaming of TV channels. The company is reportedly in talks with various production houses and companies with the goal of bringing shows, films, and entire broadcasters to its platform through a new free, ad-supported service. The feature is being tested with a limited number of users, as a YouTube spokesperson confirmed to the WSJ.
Getty Images is suing the creators of the AI art tool Stable Diffusion. This marks a significant escalation in the ongoing legal battles between content creators and the technology innovators behind generative AI. The stock photo company said it believes Stability AI "unlawfully copied and processed millions of copyright-protected images" to train its software.
For your next application, leverage large-scale generative AI models with a deep understanding of language and code, using Azure's OpenAI service. Interact with models using natural language, prompts, and few-shot learning. Use the Azure OpenAI Studio to experiment and test your models before bringing them into your code to deliver differentiated app experiences, all with Azure's enterprise-grade security built in. Build new experiences with models:
* GPT-3 generates content based on natural language input
* Codex translates natural language instructions directly into code
* DALL-E 2 generates realistic images and art from natural language descriptions
Pablo Castro, Distinguished Engineer and part of the Azure AI team, joins Jeremy Chapman for an in-depth look at Azure's OpenAI service.
► QUICK LINKS:
00:00 - Introduction
01:06 - Azure's OpenAI service
02:44 - Practical examples of OpenAI
05:23 - Integrate OpenAI models into everyday apps
09:32 - Building a custom app from scratch
11:57 - The OpenAI Studio in Azure
13:41 - The Playground
16:02 - Wrap up
► Link References:
Check out Responsible AI principles at https://aka.ms/AIprinciples
Start with Designer preview at https://designer.microsoft.com
Sign up for Azure OpenAI at https://aka.ms/oai/access
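As a rough idea of what moving from the Playground into code can look like, here is a hedged Python sketch of a few-shot completion against a completions-capable Azure OpenAI deployment. The endpoint, deployment name, API version, and prompt are placeholders for illustration, not the code shown in the episode.

```python
import os

from openai import AzureOpenAI  # pip install openai>=1.0

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",                                   # assumed API version
)

# Few-shot prompt: two worked examples teach the model the pattern before the real input.
prompt = """Write a one-line product tagline.

Product: noise-cancelling headphones for open offices
Tagline: Silence the office, keep your focus.

Product: reusable smart water bottle that tracks hydration
Tagline: Every sip, counted.

Product: a plugin that summarizes long email threads
Tagline:"""

response = client.completions.create(
    model="gpt-35-turbo-instruct",  # a completions-capable *deployment* name (assumed)
    prompt=prompt,
    max_tokens=30,
    temperature=0.7,
    stop=["\n"],  # stop at the end of the generated tagline
)
print(response.choices[0].text.strip())
```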
At Ignite, Microsoft presented a host of features based on OpenAI. In this week's episode we talk with Peter Örneholm from Active Solution, who has dug into everything from generating text with GPT-3 and creating unique images with DALL-E 2 to handling code with GitHub Copilot. We also get into topics such as responsible AI, use cases for modern AI, and why every teacher should be highly suspicious of future book reviews.
OpenAI
Azure OpenAI Service – Advanced language models | Microsoft Azure
What is Azure OpenAI? (Preview) - Azure Cognitive Services | Microsoft Learn
Hosted on Acast. See acast.com/privacy for more information.