This is the Catch Up on 3 Things by The Indian Express, and I'm Flora Swain. Today is the 23rd of December, and here are the headlines. Prime Minister Narendra Modi hailed the central government's efforts to provide ‘lakhs of government jobs in the last 1.5 years'. Addressing a Rozgar Mela virtually today, PM Modi said that his government set a “record” by giving permanent government jobs to almost 10 lakh people in the course of the last 18 months. PM Modi stated, quote, “There is a campaign going on to provide government jobs in various ministries, departments and institutions of the country. Today also, more than 71,000 youths have been given appointment letters,” unquote. Meanwhile, three members of the Khalistan Zindabad Force, who were allegedly involved in grenade attacks on police establishments in border areas of Punjab, were killed in an encounter in Uttar Pradesh's Pilibhit district today. The encounter was jointly conducted by police forces from Punjab and UP. While the Punjab Police said in the morning that the men had been arrested, police in UP later confirmed that the men had died a little before 10 am. The deceased have been identified as Gurvinder Singh, Virendra Singh alias Ravi, and Jasan Preet Singh alias Pratap Singh, all residents of Gurdaspur. Chief Minister Himanta Biswa Sarma said today that six Bangladeshis were apprehended by the Assam Police for entering Indian territory illegally and handed over to the authorities of the neighbouring country. He, however, did not mention the sector of the India-Bangladesh border where they were held. The chief minister said on X, quote, “No place for illegal infiltration in Assam, carrying out their strict monitoring against infiltration attempts, Assam police apprehended 6 Bangladeshi nationals and pushed them across the border,” unquote. Meanwhile, the police in Uttar Pradesh's Bijnor said today that they arrested the main accused in the abduction of comedian Sunil Pal and actor Mushtaq Mohammed Khan after an encounter late Sunday. While the police arrested the main accused, Lavi Pal, his accomplice Himanshu managed to escape during the cross-firing. Lavi Pal carried a reward of Rs 25,000 on his head and had been absconding since being booked by the Meerut and Bijnor police for the abduction for ransom of Mushtaq Khan on the 20th of November and Sunil Pal on the 2nd of December. On the global front, US President-elect Donald Trump announced the appointment of Sriram Krishnan, an aide of billionaire Elon Musk and a former Microsoft employee, as the Senior Policy Advisor for Artificial Intelligence at the White House Office of Science and Technology Policy. Trump, in a post on his social media platform Truth Social, said, quote, “Sriram Krishnan will focus on ensuring continued American leadership in AI and help shape and coordinate AI policy across Government, including working with the President's Council of Advisors on Science and Technology,” unquote. This was the Catch Up on 3 Things by The Indian Express.
Himanshu Sahay is the Co-Founder and CTO of Arch. In this episode, recorded live at Emergence in Prague, Sahay and The Block's Frank Chaparro discuss the evolution of crypto lending after the 2022 market turmoil and Arch's approach to expansion and collateral.
OUTLINE
00:00 Introduction
01:16 Intro to Arch
03:15 Shifting collateral standards
07:33 The state of crypto credit
12:48 New administration, new regulation
14:35 Debanking and Barron Trump
17:41 Arch's global expansion
19:29 Encounters with Voyager and Celsius
20:47 Conclusion
GUEST LINKS
Himanshu Sahay - https://www.linkedin.com/in/himanshusahay/
Himanshu Sahay on X - https://x.com/hhsahay
Arch - https://archlending.com/
Arch on X - https://x.com/ArchLending
This episode is brought to you by our sponsor: Polkadot. Polkadot is the blockspace ecosystem for boundless innovation. To discover more, head to polkadot.network
What now? What next? Insights into Australia's tertiary education sector
In this episode of the podcast, Claire is joined by Prof. Himanshu Rai from the Indian Institute of Management Indore. What IIM Indore is doing to educate the next generation of Indian business leaders and engineers has to be heard to be believed. If you want to know more about IIM Indore, their website is https://iimidr.ac.in/ and you can find Prof. Rai on LinkedIn: https://www.linkedin.com/in/askhimanshurai/
Contact Claire:
Connect with me on LinkedIn: Claire Field
Follow me on Bluesky: @clairefield.bsky.social
Check out the news pages on my website: clairefield.com.au
Email me at: admin@clairefield.com.au
The ‘What now? What next?' podcast recognises Aboriginal and Torres Strait Islander people as Australia's traditional custodians. In the spirit of reconciliation, we are proud to recommend John Briggs Consulting as a leader in Reconciliation and Indigenous engagement. To find out more, go to www.johnbriggs.net.au
Have you ever wondered if men and women can truly have platonic friendships—free from romantic tension? Or felt the pressure to abandon opposite-sex friendships in the name of a relationship? In this sizzling, soul-stirring episode of The Brave Table, I sit down with my two favorite brother besties—Himanshu Jocker, founder of Epic Businesses in Jaipur, India, and Erwin B. Valencia, a former mental health coach for the NBA and founder of the Gratitude Gang Foundation in the Philippines. Together, we dive into how opposite-sex friendships can enrich your life, your romantic relationships, and your personal growth. We discuss setting boundaries, navigating jealousy, and even the dynamics of “friend-zoning.” This is a must-listen if you've ever struggled to maintain balance between friendship and romance or wondered how to nurture healthy, meaningful relationships across the gender spectrum.
Featuring Himanshu Warden, Founder and CEO of Thevasa, an Indian fashion company. (Recorded 10/3/24)
Critical Insights on Colonial Modes of Seeing Cattle in India: Tracing the Pre-history of Green and White Revolutions (Springer, 2024) traces the contours of the symbiotic relationship between crop cultivation and cattle rearing in India by reading against the grain of several official accounts from the late colonial period to the 1980s. It also skillfully unpacks the multiple cultural expressions that revolve around cattle in India and the wider subcontinent to show how this domestic animal has greatly impacted political discourses in South Asia from colonial times into the postcolonial period. The author begins by demonstrating the dependence between the nomadic cattle breeder and the settled cultivator, at the nexus of land-livestock-agriculture, as indicated in the writings of Sir Albert Howard, who espoused some of the most sophisticated ideas on integration, holism, and mixed farming in an era when agricultural research was marked by increasing specialisation and compartmentalisation. The book springboards from the views of colonial experts who worked at imperial science institutions but passionately voiced dissenting opinions due to their emotional investment in the lives of Indian peasants, of whom Howard was a leading light. The book presents Howard and his contemporaries' writings to then engage contemporary debates surrounding organic agriculture and climate change, tracing the path out of the treadmill of industrial agriculture and factory farming. In doing so, the book shows how, historically, animal rearing has been critically linked to livelihood strategies in the Indian subcontinent. At once a dispassionate reflection on the role played by cattle and water buffaloes in not just supporting farm operations in the agro-pastoral landscape but also in contributing to millions of livelihoods in sustainable ways while fulfilling animal protein needs in the Indian diet, the book presents contemporary lessons on development perspectives relating to sustainable and holistic agriculture. A rich and sweeping treatment of this aspect of environmental history in India that tackles the transformations prompted by the arrival of veterinary medicine, veterinary education, and notions of scientific livestock management, the book is a rare read for historians, environmentalists, agriculturalists, development practitioners, and animal studies scholars with a particular interest in South Asia. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
In this episode, Himanshu and Prem discuss the intricacies of budget optimization in PPC advertising, delving into the importance of strategic budget allocation, the significance of ad-type-level budgeting, and the utility of budget rules. Common mistakes in budgeting practices are also addressed, providing valuable insights into optimizing advertising strategies for better ROI.
RESOURCES
Read our News Feed.
Book an Amazon Advertising audit.
Follow me on Twitter.
Amazon design examples.
Follow our team.
$85 to $117k in 45 days. 2-minute breakdown of what we did.
Message George.
Himanshu Verma is VP of Engineering and Country Leader at Eventbrite, the world's largest and most trusted events marketplace. At Eventbrite, Himanshu oversees the development team that builds cutting-edge cloud, mobile, and marketplace technology. Himanshu's career spans more than two decades of engineering and product development leadership at some of India's and the world's largest and best-known tech companies, including Oracle, Yahoo!, Flipkart, and most recently, Amazon. In this episode, I chat with Himanshu about where he's seeing the most value from implementing AI, his experience working in both big tech and startups, and his upbringing in a small town in Northern India. --------------- The Asian Tech Leaders podcast is proudly supported by Vultr, an advanced cloud platform that is revolutionizing how developers build and deploy applications. Their cloud infrastructure, featuring globally available cloud compute, offers unparalleled performance without the vendor lock-in or outrageous egress charges. See what all the buzz is about when you visit GetVultr.com/ATL and use code ATL250 for $250 in cloud credit.
Himanshu, a core contributor to Sentient, discusses the vision and mission of the project in this conversation. Sentient aims to create a decentralized alternative to centralized AI, where contributors are rewarded for their contributions and the AI economy is more participatory. The project recently raised $85 million in funding led by Peter Thiel's fund. Himanshu explains that while $85 million may seem like a lot in the crypto world, it is not enough considering the expensive resources required for AI, such as compute and talent. He discusses his background in academia and his journey into building different systems related to blockchain and AI. He also explains how the idea for Sentient came about and the decision to focus on building a counterpart to centralized AI. He emphasizes the importance of participation in the AI economy and the need for a more inclusive and decentralized approach. He addresses the market forces that favor crypto AI, such as the availability of compute and the potential for a more powerful economic flywheel. He also discusses the challenges of attracting AI talent to the crypto space and explains how Sentient aims to build models and create an open economy where anyone can contribute and earn rewards. Sentient aims to solve the monetization problem of open source AI models through their Open, Monetizable, Loyal (OML) models and other artifacts. OML models are open source, can be monetized, and are loyal to the builder's preferred alignment and safety rules. The OML protocol uses backdoor attacks as a basic primitive to tie ownership and monetization to the actual model. Sentient plans to attract and incentivize AI developers by offering distribution and revenue opportunities for their models. The platform will be released in a demo version at DevCon, with hackathons and limited circles experiencing it before that.
Himanshu's Twitter: https://x.com/hstyagi
Sentient's Twitter: https://x.com/sentient_agi
Chapters
00:00 Introduction and Funding in Crypto AI
03:29 The Vision: Building a Decentralized Alternative to Centralized AI
09:07 Creating an Open and Participatory AI Economy
14:51 Sentient as an AI Company: Building Models and Providing AI
19:38 The Potential of Crypto AI and Access to Capital
30:38 Attracting AI Talent and the Role of the Younger Generation
34:32 The Future of Crypto AI: A More Inclusive and Decentralized AI Economy
35:01 Solving the Monetization Problem of Open Source AI Models
44:41 Introducing OML Models: Open, Monetizable, and Loyal
48:31 Using Backdoor Attacks to Tie Ownership and Monetization
51:25 Attracting and Incentivizing AI Developers with Sentient
01:00:33 Upcoming Release and Hackathons at DevCon
Disclosures: This podcast is strictly informational and educational and is not investment advice or a solicitation to buy or sell any tokens or securities or to make any financial decisions. Do not trade or invest in any project, tokens, or securities based upon this podcast episode. The host and members at Delphi Ventures may personally own tokens or art that are mentioned on the podcast. Our current show features paid sponsorships which may be featured at the start, middle, and/or the end of the episode. These sponsorships are for informational purposes only and are not a solicitation to use any product, service or token.
In this episode, Mariah Muhammad speaks with Himanshu Tiwari, Director of Quality and Risk Management at Sun Life Health Medical and Dental, about the challenges and opportunities in integrating medical and dental care, the need for robust quality metrics in dental health, and strategies for effective leadership in the evolving healthcare landscape.
Dubai Michelin-starred chef Himanshu Saini talks about the importance of representing your culture when starting out in the culinary industry, what it means to be a Michelin-starred chef, and more. See omnystudio.com/listener for privacy information.
Continuing our special 10-part series on the Virtually Speaking Podcast, "Exploring VMware Cloud Foundation": in Episode 4, titled "VCF Compute", Himanshu Singh, Director of vSphere Product Marketing, guides us through the spectrum of vSphere editions, highlighting their adaptability for diverse customer needs. He then showcases the enhanced value proposition of vSphere within VMware Cloud Foundation, harnessing the synergy with NSX and Aria Automation to elevate private cloud infrastructures. Drawing from the essence of VMware vSphere, Himanshu emphasizes its role as the enterprise workload engine, integrating cutting-edge cloud infrastructure technology with DPU- and GPU-based acceleration to amplify workload performance. vSphere optimizes IT environments, bolstering availability, simplifying lifecycle management, and streamlining maintenance for heightened operational efficiency. Moreover, it establishes an intrinsically secure infrastructure engine, fortified out of the box and complemented by straightforward hardening guidance for compliance adherence.
Links Mentioned:
VCF Landing Page
Announcing General Availability of VMware Cloud Foundation 5.1.1
VCF Webinars
VCF YouTube Page
Virtually Speaking YouTube Page
Virtually Speaking Podcast
Watch the Entire Series:
Ep 01: Inside the Private Cloud
Ep 02: What's Inside
Ep 03: The Cloud Admin Journey
Ep 04: VCF Compute
Ep 05: VCF Storage
Ep 06: VCF Networking
Ep 07: A Cloud Management Experience
Ep 08: VMware Private AI
Ep 09: Data Services Manager
Ep 10: VMware vDefend
The Virtually Speaking Podcast is a technical podcast dedicated to discussing VMware topics related to private and hybrid cloud. Each week, Pete Flecha and John Nicholson bring in various subject matter experts from VMware and from within the industry to discuss their respective areas of expertise. If you're new to the Virtually Speaking Podcast, check out all episodes on vspeakingpodcast.com and follow on X @VirtSpeaking.
In this episode, we speak with Himanshu Kohli, Co-Founder of Client Associates, a leading private wealth management firm dedicated to providing personalized financial solutions to high-net-worth individuals and families. With a strong presence in the Indian financial landscape, Client Associates has built a reputation for trust and excellence, guiding its clients through complex wealth management processes. Himanshu Kohli's journey is one of vision, innovation, and a deep understanding of wealth dynamics. Since its inception, Client Associates has evolved to address the growing needs of India's affluent, adapting to changing market conditions and client expectations. Himanshu's commitment to building a firm that not only manages wealth but also plans for succession and legacy is truly inspiring. His insights into building capabilities, fostering trust, and understanding the mindset of the wealthy in India offer valuable lessons for anyone interested in wealth management. In our conversation, we delve into the origins of Client Associates, the challenges and successes faced along the way, and the future of private wealth management in India. Himanshu shares his personal experiences, life learnings, and advice for the younger generation aspiring to enter this field. This episode provides a comprehensive look at the intricacies of managing wealth and the importance of adapting to a rapidly changing environment. We're also delighted to share that we are now a part of the Zerodha Collective Network, and grateful for the support.
Here's what we talked about:
0:00 - Preview
0:48 - Introduction
1:38 - Emerging Indian Wealth
4:22 - Starting Private Wealth Management
11:56 - Entering the Market
19:26 - Evolution of Client Associates
22:36 - Planning Succession Wealth
28:32 - Building Capabilities
33:04 - Building Trust
36:48 - How the Rich in India Think
41:56 - Adapting to Change
49:43 - Managing Risk
52:59 - Learning from Clients
53:42 - Himanshu's Life Lessons
56:11 - Leading Client Associates
1:03:48 - Life Beyond Client Associates
1:08:48 - His Biggest Superpower
1:09:48 - Advice for the Younger Generation
1:12:45 - Outro
Tune in to gain insights from one of the leading minds in private wealth management and learn how to navigate the complexities of wealth creation and preservation. Instagram of Jivraj: https://www.instagram.com/jivrajsinghsachar/ #indiansiliconvalley #isv #indiansiliconvalleypodcast #isvpodcast #jivrajsinghsachar
It's the Brownload on my side - Nice to meet you! This podcast comes three by one! And if none of that makes sense, you need to hit play! I've got tales of human encounters from my travels, Kej meets Himanshu-kaka and Sach is getting stressed out while having a massage!
Himanshu Palsule is CEO of Cornerstone, which provides a workforce agility platform to identify skill gaps and development opportunities within organizations. Himanshu joined as CEO in January 2022, bringing more than 35 years of diverse experience leading global organizations. Prior to joining Cornerstone, he was President of Epicor, where he was responsible for managing vertical businesses and overseeing product operations. Palsule was previously CTO and Head of Strategy at Sage Software.
Key highlights:
Learning to live within your own skin and recalibrating yourself
Concentric circles of evaluating ideas - Control, Influence, Interest
Key trait when you have a seat at the table: humility
What is important when you have to say "I don't know"
Going beyond the cliche and making customer obsession real
What it really means to have a flat organization
What a future-ready Product Manager looks like
Connect with Himanshu Palsule, CEO of Cornerstone: https://www.linkedin.com/in/himanshu-palsule/
Connect with Rahul Abhyankar, Host of Product Leader's Journey: https://www.linkedin.com/in/rahulabhyankar/
1000+ Prisoners Pardoned For Eid! Residents will enjoy 4 days of free parking this Eid break! A resident received an adorable message from the driver after tipping him! Joined by Bhupender Nath, the founder of Trèsind Studio, and Chef Himanshu Saini.
Joined by Bhupender Nath, the founder of Trèsind Studio, and Chef Himanshu Saini.
In this episode of the Life Science Success Podcast, Don interviews Himanshu Gadgil from Enzine during Bio International. Himanshu explains Enzine's dual focus on developing biosimilars and its role as a CDMO with a game-changing platform, Enzine X. They discuss how Enzine's continuous manufacturing technology significantly cuts costs and speeds up market access for clients. Himanshu also talks about Enzine's global presence, clientele, and future plans for expansion in the U.S. and India. Tune in to learn about the innovative approaches Enzine is bringing to the biotech industry.
00:00 Welcome and Introduction
00:07 Overview of Enzine Biotech
00:26 Enzine's Unique Manufacturing Platform
01:08 Clientele and Market Reach
01:41 Timelines and Efficiency
02:08 Future of Continuous Manufacturing
04:22 Geographical Expansion and Workforce
05:02 Enzine's Differentiators
06:10 Conclusion and Final Thoughts
Join Himanshu on his emotional journey as he shares his experience of moving back to India after 18 years abroad. From Singapore to Canada and now back to his homeland, Himanshu opens up about his background, life in different countries, the decision-making process behind the moves, and why India holds a special place in his heart. He discusses the challenges and joys of relocating, the planning involved, settling into a new life in India, and offers valuable advice for those considering a similar move. Don't miss this insightful and heartfelt story of rediscovering roots and embracing new beginnings.
Oracle has been actively focusing on bringing AI to the enterprise at every layer of its tech stack, be it SaaS apps, AI services, infrastructure, or data. In this episode, hosts Lois Houston and Nikita Abraham, along with senior instructors Hemant Gahankari and Himanshu Raj, discuss OCI AI and Machine Learning services. They also go over some key OCI Data Science concepts and responsible AI principles. Oracle MyLearn: https://mylearn.oracle.com/ou/learning-path/become-an-oci-ai-foundations-associate-2023/127177 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Himanshu Raj, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 The world of artificial intelligence is vast and ever-changing. And with all the buzz around it lately, we figured it was the perfect time to revisit our AI Made Easy series. Join us over the next few weeks as we chat about all things AI, helping you to discover its endless possibilities. Ready to dive in? Let's go! 00:33 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:46 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hey everyone! In our last episode, we dove into Generative AI and Large Language Models. Lois: Yeah, that was an interesting one. But today, we're going to discuss the AI and machine learning services offered by Oracle Cloud Infrastructure, and we'll look at the OCI AI infrastructure. Nikita: I'm also going to try and squeeze in a couple of questions on a topic I'm really keen about, which is responsible AI. To take us through all of this, we have two of our colleagues, Hemant Gahankari and Himanshu Raj. Hemant is a Senior Principal OCI Instructor and Himanshu is a Senior Instructor on AI/ML. So, let's get started! 01:36 Lois: Hi Hemant! We're so excited to have you here! We know that Oracle has really been focusing on bringing AI to the enterprise at every layer of our stack. Hemant: It all begins with the data and infrastructure layers. OCI AI services consume data, and AI services, in turn, are consumed by applications. This approach involves extensive investment from infrastructure to SaaS applications. Generative AI and massive-scale models are the more recent steps. Oracle AI is the portfolio of cloud services for helping organizations use the data they have for business-specific uses. Business applications consume AI and ML services. The foundation of AI services and ML services is data. AI services contain pre-built models for specific uses. Some of the AI services are pre-trained, and some can be additionally trained by the customer with their own data. AI services can be consumed by calling the API for the service, passing in the data to be processed, and the service returns a result. There is no infrastructure to be managed for using AI services. 02:58 Nikita: How do I access OCI AI services? Hemant: OCI AI services provide multiple methods for access. The most common method is the OCI Console.
The OCI Console provides an easy-to-use, browser-based interface that enables access to notebook sessions and all the features of the Data Science and AI services. The REST API provides access to service functionality but requires programming expertise, and an API reference is provided in the product documentation. OCI also provides programming language SDKs for Java, Python, TypeScript, JavaScript, .NET, Go, and Ruby. The command line interface provides both quick access and full functionality without the need for scripting. 03:52 Lois: Hemant, what are the types of OCI AI services that are available? Hemant: OCI AI services is a collection of services with pre-built machine learning models that make it easier for developers to build a variety of business applications. The models can also be custom trained for more accurate business results. The different services provided are Digital Assistant, Language, Vision, Speech, Document Understanding, and Anomaly Detection. 04:24 Lois: I know we're going to talk about them in more detail in the next episode, but can you introduce us to OCI Language, Vision, and Speech? Hemant: OCI Language allows you to perform sophisticated text analysis at scale. Using pre-trained and custom models, you can process unstructured text to extract insights without data science expertise. Pre-trained models include language detection, sentiment analysis, key phrase extraction, text classification, named entity recognition, and personal identifiable information detection. Custom models can be trained for named entity recognition and text classification with domain-specific data sets. In text translation, neural machine translation is used to translate text across numerous languages. Using OCI Vision, you can upload images to detect and classify objects in them. Pre-trained models and custom models are supported. In image analysis, pre-trained models perform object detection, image classification, and optical character recognition, while custom models can perform custom object detection by detecting the location of custom objects in an image and providing a bounding box. The OCI Speech service is used to convert media files to readable text that's stored in JSON and SRT formats. Speech enables you to easily convert media files containing human speech into highly accurate text transcriptions. 06:12 Nikita: That's great. And what about document understanding and anomaly detection? Hemant: Using OCI Document Understanding, you can upload documents to detect and classify text and objects in them. You can process individual files or batches of documents. In OCR, Document Understanding can detect and recognize text in a document. In text extraction, Document Understanding provides the word-level and line-level text, and the bounding-box coordinates of where the text is found. In key value extraction, Document Understanding extracts a predefined list of key-value pairs of information from receipts, invoices, passports, and driver IDs. In table extraction, Document Understanding extracts content in tabular format, maintaining the row and column relationships of cells. In document classification, Document Understanding classifies documents into different types. The OCI Anomaly Detection service analyzes large volumes of multivariate or univariate time series data. It increases the reliability of businesses by monitoring their critical assets and detecting anomalies early with high precision. Anomaly detection is the identification of rare items, events, or observations in data that differ significantly from expectation.
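To make "calling the API for the service" concrete, here is a minimal sketch using the Python SDK Hemant mentions, with the Language service as the example. The client, model, and method names used here (AIServiceLanguageClient, BatchDetectLanguageSentimentsDetails, batch_detect_language_sentiments) are assumptions to verify against the current OCI Python SDK documentation.

```python
# Minimal sketch (not an official sample) of consuming an OCI AI service:
# build a client from local credentials, pass in the data to be processed,
# and read back the result. Names below should be verified against the
# current OCI Python SDK docs.
import oci

config = oci.config.from_file()  # reads credentials from ~/.oci/config
client = oci.ai_language.AIServiceLanguageClient(config)

details = oci.ai_language.models.BatchDetectLanguageSentimentsDetails(
    documents=[
        oci.ai_language.models.TextDocument(
            key="doc-1",
            text="The new release is fast, but the setup was confusing.",
            language_code="en",
        )
    ]
)

# The service processes the documents and returns per-document sentiment.
response = client.batch_detect_language_sentiments(details)
print(response.data)
```

The same pattern (build a client from your config, pass in the data, read the returned result) applies to the Vision, Speech, Document Understanding, and Anomaly Detection clients as well.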
07:55 Nikita: Where is Anomaly Detection most useful? Hemant: The Anomaly Detection service is designed to help with analyzing large amounts of data and identifying anomalies at the earliest possible time with maximum accuracy. Different sectors, such as utilities, oil and gas, transportation, manufacturing, telecommunications, banking, and insurance, use the Anomaly Detection service for their day-to-day activities. 08:23 Lois: Ok…and the first OCI AI service you mentioned was digital assistant… Hemant: Oracle Digital Assistant is a platform that allows you to create and deploy digital assistants, which are AI-driven interfaces that help users accomplish a variety of tasks with natural language conversations. When a user engages with the Digital Assistant, the Digital Assistant evaluates the user input and routes the conversation to and from the appropriate skills. Digital Assistant greets the user upon access. Upon user request, it lists what it can do and provides entry points into the given skills. It routes explicit user requests to the appropriate skills, handles interruptions to flows and disambiguation, and also handles requests to exit the bot. 09:21 Nikita: Excellent! Let's bring Himanshu in to tell us about machine learning services. Hi Himanshu! Let's talk about OCI Data Science. Can you tell us a bit about it? Himanshu: OCI Data Science is the cloud service focused on serving the data scientist throughout the full machine learning life cycle with support for Python and open source. The service has many features, such as a model catalog, projects, JupyterLab notebooks, model deployment, model training, management, model explanation, open source libraries, and AutoML. 09:56 Lois: Himanshu, what are the core principles of OCI Data Science? Himanshu: There are three core principles of OCI Data Science. The first one, accelerated. The first principle is about accelerating the work of the individual data scientist. OCI Data Science provides data scientists with open source libraries along with easy access to a range of compute power without having to manage any infrastructure. It also includes Oracle's own library to help streamline many aspects of their work. The second principle is collaborative. It goes beyond an individual data scientist's productivity to enable data science teams to work together. This is done through the sharing of assets, reducing duplicative work, and ensuring reproducibility and auditability of models for collaboration and risk management. Third is enterprise grade. That means it's integrated with all the OCI security and access protocols. The underlying infrastructure is fully managed. The customer does not have to think about provisioning compute and storage. And the service handles all the maintenance, patching, and upgrades, so users can focus on solving business problems with data science. 11:11 Nikita: Let's drill down into the specifics of OCI Data Science. So far, we know it's a cloud service to rapidly build, train, deploy, and manage machine learning models. But who can use it? Where is it? And how is it used? Himanshu: It serves data scientists and data science teams throughout the full machine learning life cycle. Users work in a familiar JupyterLab notebook interface, where they write Python code. And how is it used?
Users preserve their models in the model catalog and deploy their models to a managed infrastructure. 11:46 Lois: Walk us through some of the key terminology that's used. Himanshu: Some of the important product terminology of OCI Data Science are projects. Projects are containers that enable data science teams to organize their work. They represent collaborative workspaces for organizing and documenting data science assets, such as notebook sessions and models. Note that a tenancy can have as many projects as needed without limits. Now, the notebook session is where the data scientists work. Notebook sessions provide a JupyterLab environment with pre-installed open source libraries and the ability to add others. Notebook sessions are interactive coding environments for building and training models. Notebook sessions run on managed infrastructure, and the user can select CPU or GPU, the compute shape, and the amount of storage without having to do any manual provisioning. The other important feature is the Conda environment. It's an open source environment and package management system and was created for Python programs. 12:53 Nikita: What is a Conda environment used for? Himanshu: It is used in the service to quickly install, run, and update packages and their dependencies. Conda easily creates, saves, loads, and switches between environments in your notebook sessions. 13:07 Nikita: Earlier, you spoke about the support for Python in OCI Data Science. Is there a dedicated library? Himanshu: Oracle's Accelerated Data Science (ADS) SDK is a Python library that is included as part of OCI Data Science. ADS has many functions and objects that automate or simplify the steps in the data science workflow, including connecting to data, exploring and visualizing data, training a model with AutoML, and evaluating and explaining models. In addition, ADS provides a simple interface to access the Data Science service model catalog and other OCI services, including object storage. 13:45 Lois: I also hear a lot about models. What are models? Himanshu: Models define a mathematical representation of your data and business process. You create models in notebook sessions, inside projects. 13:57 Lois: What are some other important terminologies related to models? Himanshu: The next terminology is the model catalog. The model catalog is a place to store, track, share, and manage models. It is a centralized and managed repository of model artifacts. A stored model includes metadata about the provenance of the model, including Git-related information and the script or notebook used to push the model to the catalog. Models stored in the model catalog can be shared across members of a team, and they can be loaded back into a notebook session. The next one is model deployments. Model deployments allow you to deploy models stored in the model catalog as HTTP endpoints on managed infrastructure. 14:45 Lois: So, how do you operationalize these models? Himanshu: Deploying machine learning models as web applications, HTTP API endpoints, serving predictions in real time is the most common way to operationalize models. HTTP or API endpoints are flexible and can serve requests for model predictions. Data science jobs enable you to define and run repeatable machine learning tasks on fully managed infrastructure. Nikita: Thanks for that, Himanshu.
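As a rough illustration of operationalizing a model the way Himanshu describes, the sketch below invokes a model deployment's HTTP endpoint with a signed request. The endpoint URL and payload are placeholders, and the request signing assumes the OCI Python SDK's Signer, which plugs into the requests library's auth mechanism; the exact payload format depends on the model's own scoring logic.

```python
# Hedged sketch: calling a Data Science model deployment's /predict
# endpoint. The endpoint URL and payload below are placeholders.
import oci
import requests

config = oci.config.from_file()
signer = oci.signer.Signer(
    tenancy=config["tenancy"],
    user=config["user"],
    fingerprint=config["fingerprint"],
    private_key_file_location=config["key_file"],
)

# Hypothetical endpoint of a model deployed from the model catalog.
endpoint = ("https://modeldeployment.us-ashburn-1.oci.customer-oci.com/"
            "ocid1.datasciencemodeldeployment.oc1..example/predict")
payload = {"data": [[5.1, 3.5, 1.4, 0.2]]}  # shape depends on the model

response = requests.post(endpoint, json=payload, auth=signer)
print(response.json())
```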
15:18 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You'll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all free to subscribers. So, what are you waiting for? Pick a topic, leverage the Oracle University Learning Community to ask questions, and then sit for your certification. Visit mylearn.oracle.com to get started. 15:46 Nikita: Welcome back! The Oracle AI Stack consists of AI services and machine learning services, and these services are built using AI infrastructure. So, let's move on to that. Hemant, what are the components of OCI AI Infrastructure? Hemant: OCI AI Infrastructure is mainly composed of GPU-based instances. Instances can be virtual machines or bare metal machines. High-performance cluster networking allows instances to communicate with each other. Superclusters are a massive network of GPU instances with multiple petabytes per second of bandwidth. And a variety of fully managed storage options, from a single byte to exabytes, without upfront provisioning are also available. 16:35 Lois: Can we explore each of these components a little more? First, tell us, why do we need GPUs? Hemant: ML and AI need lots of repetitive computations to be made on huge amounts of data. Parallel computing on GPUs is designed for many processes at the same time. A GPU is a piece of hardware that is incredibly good at performing computations. A GPU has thousands of lightweight cores, all working on their share of data in parallel. This gives them the ability to crunch through extremely large data sets at tremendous speed. 17:14 Nikita: And what are the GPU instances offered by OCI? Hemant: GPU instances are ideally suited for model training and inference. Bare metal and virtual machine compute instances powered by NVIDIA H100, A100, A10, and V100 GPUs are made available by OCI. 17:35 Nikita: So how do we choose what to train from these different GPU options? Hemant: For large-scale AI training, data analytics, and high-performance computing, bare metal instances BM 8 X NVIDIA H100 and BM 8 X NVIDIA A100 can be used. These provide up to nine times faster AI training and 30 times higher acceleration for AI inferencing. The other bare metal and virtual machines are used for small AI training, inference, streaming, gaming, and virtual desktop infrastructure. 18:14 Lois: And why would someone choose the OCI AI stack over its counterparts? Hemant: Oracle offers all the features and is the most cost-effective option when compared to its counterparts. For example, the BM GPU 4.8 version 2 instance costs just $4 per hour and is used by many customers. Superclusters are a massive network with multiple petabytes per second of bandwidth. They can scale up to 4,096 OCI bare metal instances with 32,768 GPUs. We also have a choice of bare metal A100 or H100 GPU instances, and we can select a variety of storage options, like object store, block store, or even file systems. For networking speeds, we can reach 1,600 GB per second with A100 GPUs and 3,200 GB per second with H100 GPUs. With OCI storage, we can select local SSDs with up to four NVMe drives, block storage up to 32 terabytes per volume, object storage up to 10 terabytes per object, and file systems up to eight exabytes per file system. The OCI File system employs five-way replicated storage located in different fault domains to provide redundancy for resilient data protection. HPC file systems, such as BeeGFS and many others, are also offered. OCI HPC file systems are available on Oracle Cloud Marketplace and make it easy to deploy a variety of high-performance file servers.
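To ground the infrastructure discussion, here is a hedged sketch of launching one of the GPU shapes Hemant lists, using the OCI Python SDK's Compute client. All OCIDs below are placeholders, and the shape name should be checked against the current OCI shape catalog.

```python
# Hedged sketch: provisioning a bare metal GPU instance for training.
# Every OCID here is a placeholder; substitute real values before use.
import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

details = oci.core.models.LaunchInstanceDetails(
    availability_domain="Uocm:PHX-AD-1",              # placeholder AD
    compartment_id="ocid1.compartment.oc1..example",
    shape="BM.GPU.A100-v2.8",                         # bare metal A100 shape
    display_name="ml-training-node",
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..example",          # a GPU-enabled image
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..example",
    ),
)

instance = compute.launch_instance(details).data
print(instance.id, instance.lifecycle_state)
```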
20:11 Lois: I think a discussion on AI would be incomplete if we don't talk about responsible AI. We're using AI more and more every day, but can we actually trust it? Hemant: For us to trust AI, it must be driven by ethics that guide us as well. Nikita: And do we have some principles that guide the use of AI? Hemant: AI should be lawful, complying with all applicable laws and regulations. AI should be ethical, that is, it should ensure adherence to ethical principles and values that we uphold as humans. And AI should be robust, both from a technical and social perspective, because even with good intentions, AI systems can cause unintentional harm. AI systems do not operate in a lawless world. A number of legally binding rules at the national and international levels apply or are relevant to the development, deployment, and use of AI systems today. The law not only prohibits certain actions but also enables others, like protecting the rights of minorities or protecting the environment. Besides horizontally applicable rules, various domain-specific rules exist that apply to particular AI applications, for instance, the medical device regulation in the health care sector. In the AI context, equality entails that the system's operations cannot generate unfairly biased outputs. And while we adopt AI, citizens' rights should also be protected. 21:50 Lois: Ok, but how do we derive AI ethics from these? Hemant: There are three main principles. AI should be used to help humans and allow for oversight. It should never cause physical or social harm. Decisions taken by AI should be transparent and fair, and also should be explainable. AI that follows these ethical principles is responsible AI. So if we map the AI ethical principles to responsible AI requirements, they look like this: AI systems should follow human-centric design principles and leave meaningful opportunity for human choice. This means securing human oversight. AI systems and the environments in which they operate must be safe and secure; they must be technically robust and should not be open to malicious use. The development, deployment, and use of AI systems must be fair, ensuring equal and just distribution of both benefits and costs. AI should be free from unfair bias and discrimination. Decisions taken by AI, to the extent possible, should be explainable to those directly and indirectly affected. 23:21 Nikita: This is all great, but what does a typical responsible AI implementation process look like? Hemant: First, governance needs to be put in place. Second, develop a set of policies and procedures to be followed. And once implemented, ensure compliance by regular monitoring and evaluation. Lois: And this is all managed by developers? Hemant: Typical roles that are involved in the implementation cycles are developers, deployers, and end users of the AI. 23:56 Nikita: Can we talk about AI specifically in health care? How do we ensure that there is fairness and no bias? Hemant: AI systems are only as good as the data that they are trained on. If that data is predominantly from one gender or racial group, the AI systems might not perform as well on data from other groups. 24:21 Lois: Yeah, and there's also the issue of ensuring transparency, right? Hemant: AI systems often make decisions based on complex algorithms that are difficult for humans to understand. As a result, patients and health care providers can have difficulty trusting the decisions made by the AI. AI systems must be regularly evaluated to ensure that they are performing as intended and not causing harm to patients.
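As one concrete example of the regular evaluation Hemant calls for, the self-contained sketch below computes a simple fairness metric, the demographic parity difference, across two groups. The predictions and the alert threshold are made up for illustration.

```python
# Self-contained sketch of one fairness check: demographic parity
# difference. All data below is invented for illustration only.

def positive_rate(predictions):
    """Fraction of cases the model assigned to the positive class."""
    return sum(predictions) / len(predictions)

# Hypothetical binary predictions (1 = "flagged for follow-up care"),
# split by a sensitive attribute such as gender or racial group.
group_a = [1, 0, 1, 1, 0, 1, 0, 1]
group_b = [0, 0, 1, 0, 0, 1, 0, 0]

gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"Demographic parity difference: {gap:.2f}")

# A gap near 0 means both groups receive positive predictions at similar
# rates; a large gap is a signal to re-examine the training data.
if gap > 0.2:  # threshold chosen arbitrarily for the example
    print("Warning: predictions differ substantially across groups.")
```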
As a result, patients and health care providers can have difficulty trusting the decisions made by the AI. AI systems must be regularly evaluated to ensure that they are performing as intended and not causing harm to patients. 24:49 Nikita: Thank you, Hemant and Himanshu, for this really insightful session. If you're interested in learning more about the topics we discussed today, head on over to mylearn.oracle.com and search for the Oracle Cloud Infrastructure AI Foundations course. Lois: That's right, Niki. You'll find demos that you can watch as well as skill checks that you can attempt to better your understanding. In our next episode, we'll get into the OCI AI Services we discussed today and talk about them in more detail. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 25:25 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
In this episode of the IoT For All Podcast, Himanshu Mehrotra, Vice President of Product Management at FourKites, joins Ryan Chacon to discuss IoT in the freight industry. The conversation covers how IoT and AI enable revolutionary shipping visibility solutions, such as real-time transport visibility platforms, the importance of IoT in providing real-time data and predictive analytics to ensure timely production and delivery, automating gate operations and utilizing smart labels for granular tracking, data standardization and intermittent connectivity challenges, computer vision and cameras as sensors, and using generative AI to offer recommendations for optimizing the supply chain. Himanshu Mehrotra is the Vice President of Product Management at FourKites. He oversees the core shipment visibility solutions and the data platform that enables FourKites to connect to external ecosystems of data signals, either from direct integrations, signals from IoT devices, or onboarded devices on assets. Himanshu has over two decades of experience in supply chain technology. Prior to FourKites, he shaped the solution strategy and go-to-market approach for control tower and logistics solutions at Blue Yonder, where he also leveraged machine learning-based predictive analytics. Himanshu has been at the forefront of the supply chain industry's most transformational changes, witnessing firsthand the profound impact of events such as the pandemic on retail and manufacturing and the revolutionary influence of innovators like Amazon on customer behavior. FourKites extends real-time visibility beyond transportation into yards, warehouses, stores, and more, tracking over 3.2 million shipments daily across 200+ countries and territories. FourKites combines real-time data and powerful AI to help companies make their supply chain more efficient. Over 1,500 of the world's most recognized brands - including 9 of the top 10 CPG and 18 of the top 20 food and beverage companies - trust FourKites to reduce costs and increase customer satisfaction. Discover more about supply chain and IoT at https://www.iotforall.com More about FourKites: https://www.fourkites.com Connect with Himanshu: https://www.linkedin.com/in/himanshu-mehrotra-0401772/ (00:00) Intro (00:23) Himanshu Mehrotra and FourKites (03:14) IoT in the freight industry and supply chain (08:04) Advancements in IoT devices and connectivity (10:32) What does real-time IoT data enable? (12:42) Data standardization (14:07) Intermittent connectivity challenges (15:47) Turning data into value with AI (19:39) Using generative AI in the supply chain (21:03) What is the future of IoT in the freight industry (23:46) Cameras as sensors and computer vision (26:30) Learn more and follow up Subscribe on YouTube: https://bit.ly/2NlcEwm Join Our Newsletter: https://www.iotforall.com/iot-newsletter Follow Us on Social: https://linktr.ee/iot4all Check out the IoT For All Media Network: https://www.iotforall.com/podcast-overview
In this week's episode, Lois Houston and Nikita Abraham, along with Senior Instructor Himanshu Raj, take you through the extraordinary capabilities of Generative AI, a subset of deep learning that doesn't make predictions but rather creates its own content. They also explore the workings of Large Language Models. Oracle MyLearn: https://mylearn.oracle.com/ou/learning-path/become-an-oci-ai-foundations-associate-2023/127177 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript: 00:00 The world of artificial intelligence is vast and ever-changing. And with all the buzz around it lately, we figured it was the perfect time to revisit our AI Made Easy series. Join us over the next few weeks as we chat about all things AI, helping you to discover its endless possibilities. Ready to dive in? Let's go! 00:33 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:46 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! In our last episode, we went over the basics of deep learning. Today, we'll look at generative AI and large language models, and discuss how they work. To help us with that, we have Himanshu Raj, Senior Instructor on AI/ML. So, let's jump right in. Hi Himanshu, what is generative AI? 01:21 Himanshu: Generative AI refers to a type of AI that can create new content. It is a subset of deep learning, where the models are trained not to make predictions but rather to generate output on their own. Think of generative AI as an artist who looks at a lot of paintings and learns the patterns and styles present in them. Once it has learned these patterns, it can generate new paintings that resemble what it learned. 01:48 Lois: Let's take an example to understand this better. Suppose we want to train a generative AI model to draw a dog. How would we achieve this? Himanshu: You would start by giving it a lot of pictures of dogs to learn from. The AI does not know anything about what a dog looks like. But by looking at these pictures, it starts to figure out common patterns and features, like dogs often have pointy ears, narrow faces, whiskers, etc. You can then ask it to draw a new picture of a dog. The AI will use the patterns it learned to generate a picture that hopefully looks like a dog. But remember, the AI is not copying any of the pictures it has seen before but creating a new image based on the patterns it has learned. This is the basic idea behind generative AI. In practice, the process involves a lot of complex maths and computation, and there are different techniques and architectures that can be used, such as variational autoencoders (VAEs) and Generative Adversarial Networks (GANs). 02:48 Nikita: Himanshu, where is generative AI used in the real world? Himanshu: Generative AI models have a wide variety of applications across numerous domains. For image generation, generative models like GANs are used to generate realistic images.
They can be used for tasks like creating artwork, synthesizing images of human faces, or transforming sketches into photorealistic images. For text generation, large language models like GPT-3, which are generative in nature, can create human-like text. This has applications in content creation, like writing articles and generating ideas, and in conversational AI, like chatbots and customer service agents. They are also used in programming for code generation and debugging, and much more. Generative AI models can also be used for music generation. They create new pieces of music after being trained on a specific style or collection of tunes. A famous example is OpenAI's MuseNet. 03:42 Lois: You mentioned large language models in the context of text-based generative AI. So, let's talk a little more about it. Himanshu, what exactly are large language models? Himanshu: LLMs are a type of artificial intelligence model built to understand, generate, and process human language at a massive scale. They were primarily designed for sequence-to-sequence tasks such as machine translation, where an input sequence is transformed into an output sequence. LLMs can be used to translate text from one language to another. For example, an LLM could be used to translate English text into French. To do this job, the LLM is trained on a massive data set of text and code, which allows it to learn the patterns and relationships that exist between different languages. The LLM translates “How are you?” from English to the French “Comment allez-vous?” It can also answer questions like, what is the capital of France? And it would answer, the capital of France is Paris. And it can write an essay on a given topic. For example, ask it to write an essay on the French Revolution, and it will come up with a response complete with a title and an introduction. 04:53 Lois: And how do LLMs actually work? Himanshu: LLMs are typically based on deep learning architectures such as transformers. They are also trained on vast amounts of text data to learn language patterns and relationships, with a massive number of parameters, usually on the order of millions or even billions. LLMs also have the ability to comprehend and understand natural language text at a semantic level. They can grasp context, infer meaning, and identify relationships between words and phrases. 05:26 Nikita: What are the most important factors for a large language model? Himanshu: Model size and parameters are crucial aspects of large language models and other deep learning models. They significantly impact the model's capabilities, performance, and resource requirements. So, what is model size? The model size refers to the amount of memory required to store the model's parameters and other data structures. Larger model sizes generally lead to better performance, as they can capture more complex patterns and representations from the data. The parameters are the numerical values of the model that change as it learns to minimize the model's error on the given task. In the context of LLMs, parameters refer to the weights and biases of the model's transformer layers. Parameters are usually measured in terms of millions or billions. For example, GPT-3, one of the largest LLMs to date, has 175 billion parameters, making it extremely powerful in language understanding and generation. Tokens represent the individual units into which a piece of text is divided during processing by the model. In natural language, tokens are usually words, subwords, or characters.
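To make the idea of tokens concrete, here is a minimal sketch using the openly available GPT-2 tokenizer from the Hugging Face transformers library. This illustrates subword tokenization in general, not any Oracle-specific API:

```python
# Illustrative only: shows how a subword tokenizer splits text into tokens.
# Assumes the Hugging Face "transformers" library and the public GPT-2 tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "Jane threw the frisbee, and her dog fetched it."
tokens = tokenizer.tokenize(text)   # subword strings, e.g. 'Ġfris', 'bee'
ids = tokenizer.encode(text)        # the integer token IDs the model consumes

print(tokens)
print(len(ids), "tokens")           # this count is what a model's token limit measures
```

Running it shows that a word such as "frisbee" can split into more than one token, which is why token counts, not word counts, determine how much text fits in a model's context window.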
Some models have a maximum token limit that they can process, and longer text may require truncation or splitting. Again, balancing model size, parameters, and token handling is crucial when working with LLMs. 06:49 Nikita: But what's so great about LLMs? Himanshu: Large language models can understand and interpret human language more accurately and contextually. They can comprehend complex sentence structures, nuances, and word meanings, enabling them to provide more accurate and relevant responses to user queries. These models can generate human-like text that is coherent and contextually appropriate. This capability is valuable for content creation, automated writing, and generating personalized responses in applications like chatbots and virtual assistants. They can perform a variety of tasks. Large language models are very versatile and adaptable to various industries. They can be customized to excel in applications such as language translation, sentiment analysis, code generation, and much more. LLMs can handle multiple languages, making them valuable for cross-lingual tasks like translation, sentiment analysis, and understanding diverse global content. Large language models can also be fine-tuned for a specific task using a minimal amount of domain data, and the efficiency of LLMs usually grows with more data and parameters. 07:55 Lois: You mentioned “sequence-to-sequence tasks” earlier. Can you explain the concept in simple terms for us? Himanshu: Understanding language is difficult for computers and AI systems. The reason being that words often have meanings based on context. Consider a sentence such as Jane threw the frisbee, and her dog fetched it. In this sentence, there are a few things that relate to each other. Jane is doing the throwing. The dog is doing the fetching. And “it” refers to the frisbee. Suppose we are looking at the word “it” in the sentence. As a human, we understand easily that “it” refers to the frisbee. But for a machine, it can be tricky. The goal in sequence problems is to find patterns, dependencies, or relationships within the data and make predictions, classifications, or generate new sequences based on that understanding. 08:48 Lois: And where are sequence models mostly used? Himanshu: Some common examples of sequence models include natural language processing, which we call NLP, tasks such as machine translation, text generation, sentiment analysis, and language modeling, all of which involve dealing with sequences of words or characters. Speech recognition: converting audio signals into text involves working with sequences of phonemes or subword units to recognize spoken words. Music generation: generating new music involves modeling musical sequences, notes, and rhythms to create original compositions. Gesture recognition: sequences of motion or hand gestures are used to interpret human movements for applications such as sign language recognition or gesture-based interfaces. Time series analysis: in fields such as finance, economics, weather forecasting, and signal processing, time series data is used to predict future values, detect anomalies, and understand patterns in temporal data. 09:56 The Oracle University Learning Community is an excellent place to collaborate and learn with Oracle experts and fellow learners. Grow your skills, inspire innovation, and celebrate your successes. All your activities, from liking a post to answering questions and sharing with others, will help you earn a valuable reputation, badges, and ranks to be recognized in the community.
Visit mylearn.oracle.com to get started. 10:23 Nikita: Welcome back! Himanshu, what would be the best way to solve those sequence problems you mentioned? Let's use the same sentence, “Jane threw the frisbee, and her dog fetched it” as an example. Himanshu: The solution is transformers. It's as if the model has a bird's-eye view of the entire sentence and can see how all the words relate to each other. This allows it to understand the sentence as a whole instead of just a series of individual words. Transformers, with their self-attention mechanism, can look at all the words in the sentence at the same time and understand how they relate to each other. For example, a transformer can simultaneously understand the connections between Jane and dog even though they are far apart in the sentence. 11:13 Nikita: But how? Himanshu: The answer is attention, which adds context to the text. Attention would notice that dog comes after frisbee, fetched comes after dog, and it comes after fetched. The transformer does not look at “it” in isolation. Instead, it pays attention to all the other words in the sentence at the same time. By considering all these connections, the model can figure out that “it” likely refers to the frisbee. The most famous current models that are emerging in natural language processing tasks consist of dozens of transformers or some of their variants, for example, GPT or BERT. 11:53 Lois: I was looking at the AI Foundations course on MyLearn and came across the terms “prompt engineering” and “fine-tuning.” Can you shed some light on them? Himanshu: A prompt is the input or initial text provided to the model to elicit a specific response or behavior. So, this is something which you write or ask of a language model. Now, what is prompt engineering? Prompt engineering is the process of designing and formulating specific instructions or queries to interact with a large language model effectively. In the context of large language models, such as GPT-3 or BERT, prompts are the input text or questions given to the model to generate responses or perform specific tasks. The goal of prompt engineering is to ensure that the language model understands the user's intent correctly and provides accurate and relevant responses. 12:47 Nikita: That sounds easy enough, but fine-tuning seems a bit more complex. Can you explain it with an example? Himanshu: Imagine you have a versatile recipe robot named Chef Bot. Suppose that Chef Bot is designed to create delicious recipes for any dish you desire. Ask it for a pizza recipe, and Chef Bot recognizes the prompt as a request for a pizza recipe and knows exactly what to do. However, if you want Chef Bot to be an expert in a particular type of cuisine, such as Italian dishes, you fine-tune Chef Bot for Italian cuisine by immersing it in a culinary crash course filled with Italian cookbooks, traditional Italian recipes, and even Italian cooking shows. During this process, Chef Bot becomes more specialized in creating authentic Italian recipes, and this process is called fine-tuning. LLMs are general purpose models that are pre-trained on large data sets but are often fine-tuned to address specific use cases. When you combine prompt engineering and fine-tuning, you get a culinary wizard in Chef Bot, a recipe robot that is not only great at understanding specific dish requests but also capable of fulfilling them, even mastering the art of cooking in a particular culinary style. 14:08 Lois: Great!
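The attention mechanism Himanshu describes can be sketched in a few lines of NumPy. This toy version of scaled dot-product attention uses random, untrained vectors; a real transformer learns projections that make “it” attend strongly to “frisbee”:

```python
# A toy scaled dot-product self-attention, the core operation inside transformers.
# Illustrative only: real models learn query/key/value projections; here the
# word vectors are random, so the weights merely show the mechanics.
import numpy as np

rng = np.random.default_rng(0)
words = ["Jane", "threw", "the", "frisbee", "and", "her", "dog", "fetched", "it"]

d = 8                                   # toy embedding size
X = rng.normal(size=(len(words), d))    # one vector per word

Q, K, V = X, X, X                       # untrained: queries = keys = values
scores = Q @ K.T / np.sqrt(d)           # how much each word attends to every other word

# softmax over each row turns raw scores into attention weights
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

context = weights @ V                   # each word's new, context-aware representation

# The row for "it": in a trained model, high weight would fall on "frisbee".
print(dict(zip(words, weights[words.index("it")].round(2))))
```

Every word gets a weighted view of every other word in one step, which is what lets the model resolve “it” against “frisbee” without reading the sentence strictly left to right.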
Now that we've spoken about all the major components, can you walk us through the life cycle of a large language model? Himanshu: The life cycle of a Large Language Model, LLM, involves several stages, from its initial pre-training to its deployment and ongoing refinement. The first stage of this life cycle is pre-training. The LLM is initially pre-trained on a large corpus of text data from the internet. During pre-training, the model learns grammar, facts, reasoning abilities, and general language understanding. The model predicts the next word in a sentence given the previous words, which helps it capture relationships between words and the structure of language. The second phase is fine-tuning initialization. After pre-training, the model's weights are initialized, and it's ready for task-specific fine-tuning. Fine-tuning can involve supervised learning on labeled data for specific tasks, such as sentiment analysis, translation, or text generation. The model is fine-tuned on specific tasks using a smaller, domain-specific data set. The weights from pre-training are updated based on the new data, making the model task aware and specialized. The next phase of the LLM life cycle is prompt engineering. In this phase, you craft effective prompts to guide the model's behavior in generating specific responses. Different prompt formulations, instructions, or context can be used to shape the output. 15:34 Nikita: Ok… we're with you so far. What's next? Himanshu: The next phase is evaluation and iteration. Models are evaluated using various metrics to assess their performance on specific tasks. Iterative refinement involves adjusting model parameters, prompts, and fine-tuning strategies to improve results. As a part of this step, you also do few-shot and one-shot inference. If needed, you further fine-tune the model with a small number of examples (few-shot) or a single example (one-shot) for new tasks or scenarios. You also do bias mitigation and consider the ethical concerns that may arise in the model's output, implementing measures to ensure fairness, inclusivity, and responsible use. 16:28 Himanshu: The next phase in the LLM life cycle is deployment. Once the model has been fine-tuned and evaluated, it is deployed for real-world applications. Deployed models can perform tasks such as text generation, translation, summarization, and much more. You also perform monitoring and maintenance in this phase. You continuously monitor the model's performance and output to ensure it aligns with desired outcomes, and you periodically update and retrain the model to incorporate new data and to adapt to evolving language patterns. This overall life cycle can also include a feedback loop, where you gather feedback from users and incorporate it into the model's improvement process. You use this feedback to further refine prompts, fine-tuning, and overall model behavior. RLHF, which is Reinforcement Learning from Human Feedback, is a very good example of this feedback loop. You also research and innovate as a part of this life cycle, continuing to research and develop new techniques to enhance the model's capabilities and address the different challenges associated with it. 17:40 Nikita: As we're talking about the LLM life cycle, I see that fine-tuning is not only about making an LLM task-specific. So, what are some other reasons you would fine-tune an LLM model? Himanshu: The first one is task-specific adaptation.
Pre-trained language models are trained on extensive and diverse data sets and have good general language understanding. They excel in language generation and comprehension tasks, though this broad understanding of language may not lead to optimal performance in specific tasks. These models are not task-specific, so the solution is fine-tuning. The fine-tuning process customizes the pre-trained models for a specific task by further training on task-specific data to adapt the model's knowledge. The second reason is domain-specific vocabulary. Pre-trained models might lack knowledge of specific words and phrases essential for certain tasks in fields such as the legal, medical, finance, and technical domains. This can limit their performance when applied to domain-specific data. Fine-tuning enables the model to adapt and learn the domain-specific words and phrases of these fields. 18:56 Himanshu: The third reason to fine-tune is efficiency and resource utilization. Fine-tuning is computationally efficient compared to training from scratch. It reuses the knowledge from pre-trained models, saving time and resources, and it requires fewer iterations to achieve task-specific competence. Shorter training cycles expedite the model development process and conserve computational resources, such as GPU memory and processing power. Fine-tuning also enables quicker model deployment, with a faster time to production for real-world applications. Fine-tuning is, again, scalable, enabling adaptation to various tasks with the same base model, which further reduces resource demands and leads to cost savings for research and development. The fourth reason to fine-tune is ethical concerns. Pre-trained models learn from diverse data and can inherit biases from it. Fine-tuning might not completely eliminate biases, but careful curation of task-specific data helps avoid biased or harmful vocabulary, and the responsible use of domain-specific terms promotes ethical AI applications. 20:14 Lois: Thank you so much, Himanshu, for spending time with us. We had such a great time learning from you. If you want to learn more about the topics discussed today, head over to mylearn.oracle.com and get started on our free AI Foundations course. Nikita: Yeah, we even have a detailed walkthrough of the architecture of transformers that you might want to check out. Join us next week for a discussion on the OCI AI Portfolio. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 20:44 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
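As a companion to the prompt-engineering and few-shot ideas in the episode above, here is a minimal sketch of how a few-shot prompt is typically assembled. The task, examples, and labels are invented for illustration:

```python
# A few-shot prompt: a handful of worked examples followed by the new input.
# Everything here (task, reviews, labels) is hypothetical, for illustration only.
examples = [
    ("The battery died within a week.", "negative"),
    ("Setup took two minutes and it just works.", "positive"),
]
new_review = "The screen is gorgeous, but the speakers crackle."

prompt = "Classify each review as positive or negative.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {new_review}\nSentiment:"   # the model completes this line

print(prompt)
```

The same pattern with one example is one-shot inference, and with none it is zero-shot; only the number of worked examples in the prompt changes.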
Explore the latest developments in advertising on our podcast episode. Join Prem and Himanshu from Georges Blog as they discuss Amazon's new 'Sponsored TV' ad type. In this episode, we share insights from a recent three-week case study where we tested this new ad format. Tune in to gain valuable insights into the effectiveness of Sponsored TV ads and their potential impact on your advertising strategy. Learn more about the case studies here. RESOURCES: Read our News Feed. Book an Amazon Advertising audit. Follow me on Twitter. Amazon design examples. Follow our team. $85 to $117k in 45 days: 2-minute breakdown of what we did. Message George.
In S5 E5 I am delighted to welcome Professor Himanshu Tambe to the podcast. Himanshu's passion is to empower individuals and organisations to thrive through continuous education. He is currently Visiting Faculty at the Singapore Management University (SMU) and the Indian School of Business (ISB), teaching Design of Business, Organisation Design, Leadership and Workforce Analytics. He also operates an early-stage software product company focused on optimising operations. Prior to this, he held several senior roles with Accenture Strategy & Consulting, the last one being Managing Director for the Talent & Organisation Consulting business in Southeast Asia and India. Before that he worked for Arthur D Little, the world's oldest consulting firm; established and operated a niche Strategy and Organisation Design company; and worked as an automobile manufacturing engineer at the very start of his career. Over a 30-year career in consulting and industry, he has proudly served more than 100 organisations across the Public Sector, Metals & Mining and Banking in India, Singapore, Malaysia, Indonesia, Korea, Australia, and Europe. His work has been focused on designing and implementing Business Models, Organisation Design, Process Models, and Large-Scale Behaviour Change to deliver measurable improvements in the performance of people and organisations. Over this period, Himanshu has acquired deep experience facilitating senior executive teams to execute change through vision and values alignment. Beyond the workplace he is, like me, an avid yoga practitioner and meditator and is learning jazz dance. In this conversation Himanshu shares his insights from the global business environment on the key trends shaping the future of work and the workforce. We discuss modern work and role redesign, humans versus machines, data-driven change, the quest to reconnect with meaning and purpose, and investing in "hinge" leadership and unfreezing the frozen middle or core work-unit leaders. Many themes will be familiar to regular listeners, and ultimately we are left with more questions and a call to action to reimagine the work environment. Thank you Himanshu. Episode links: LinkedIn: https://www.linkedin.com/in/himanshutambe/ Himanshu Tambe on The ISB Leadercast Podcast: https://podcasts.apple.com/au/podcast/leadercast/id1691914486?i=1000626210529 Digital Health Festival Melbourne May 7/8 2024: https://digitalhealthfest.com.au/ Calling all Clinician Innovators: Applications have opened for the CICA Lab Incubator program. More details here: https://www.cicalab.co/cicalab-incubator The Mind Full Medic Podcast is proudly sponsored by the MBA NSW-ACT. Find out more about their service or donate today at www.mbansw.org.au. Disclaimer: The content in this podcast is not intended to constitute or be a substitute for professional medical advice, diagnosis or treatment. Always seek the advice of your doctor or other qualified health care professional. Moreover, views expressed here are our own and do not necessarily reflect those of our employers or other official organisations.
In the fifth episode of the Kiln Rendez-Vous podcast, Edgar Roth engages with Himanshu Tyagi, Co-founder and CTO of Witness Chain. The conversation delves into Tyagi's background and the factors that led him to explore optimistic rollups. Witness Chain specializes in providing watchtowers for rollups, DePIN, and AI Coprocessors. These programmable watchtowers enhance transaction validation on rollups by monitoring and addressing faulty transactions. The episode discusses key aspects such as an overview of Witness Chain, the significance of watchtowers in rollups, pricing and incentives for their watchtower service, insights into the future of Witness Chain and DePIN chains, and the potential for cross-chain applications.
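For readers new to the idea, the watchtower role described above can be sketched conceptually. This is not Witness Chain's interface; every name in the sketch is a hypothetical placeholder for the basic monitor, verify, and challenge loop:

```python
# Conceptual watchtower loop for an optimistic rollup.
# All names (fetch_latest_batch, recompute_state_root, submit_fraud_alert)
# are hypothetical placeholders, not Witness Chain's actual API.
import time

def watch(rollup, poll_seconds=30):
    while True:
        batch = rollup.fetch_latest_batch()          # claimed batch posted to the base chain
        claimed = batch.claimed_state_root
        recomputed = rollup.recompute_state_root(batch.transactions)
        if claimed != recomputed:
            # A faulty batch: flag it while the challenge window is still open.
            rollup.submit_fraud_alert(batch.id, recomputed)
        time.sleep(poll_seconds)
```

The economic design questions the episode covers, such as pricing and incentives, are about who runs this loop and what they earn for catching a mismatch.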
Oracle has been actively focusing on bringing AI to the enterprise at every layer of its tech stack, be it SaaS apps, AI services, infrastructure, or data. In this episode, hosts Lois Houston and Nikita Abraham, along with senior instructors Hemant Gahankari and Himanshu Raj, discuss OCI AI and Machine Learning services. They also go over some key OCI Data Science concepts and responsible AI principles. Oracle MyLearn: https://mylearn.oracle.com/ou/learning-path/become-an-oci-ai-foundations-associate-2023/127177 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Himanshu Raj, and the OU Studio Team for helping us create this episode. ------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hey everyone! In our last episode, we dove into Generative AI and Large Language Models. Lois: Yeah, that was an interesting one. But today, we're going to discuss the AI and machine learning services offered by Oracle Cloud Infrastructure, and we'll look at the OCI AI infrastructure. Nikita: I'm also going to try and squeeze in a couple of questions on a topic I'm really keen about, which is responsible AI. To take us through all of this, we have two of our colleagues, Hemant Gahankari and Himanshu Raj. Hemant is a Senior Principal OCI Instructor and Himanshu is a Senior Instructor on AI/ML. So, let's get started! 01:16 Lois: Hi Hemant! We're so excited to have you here! We know that Oracle has really been focusing on bringing AI to the enterprise at every layer of our stack. Hemant: It all begins with the data and infrastructure layers. OCI AI services consume data, and AI services, in turn, are consumed by applications. This approach involves extensive investment from infrastructure to SaaS applications. Generative AI and massive-scale models are the more recent steps. Oracle AI is the portfolio of cloud services for helping organizations use the data they may have for business-specific uses. Business applications consume AI and ML services. The foundation of AI services and ML services is data. AI services contain pre-built models for specific uses. Some of the AI services are pre-trained, and some can be additionally trained by the customer with their own data. AI services can be consumed by calling the API for the service, passing in the data to be processed, and the service returns a result. There is no infrastructure to be managed for using AI services. 02:37 Nikita: How do I access OCI AI services? Hemant: OCI AI services provide multiple methods for access. The most common method is the OCI Console. The OCI Console provides an easy-to-use, browser-based interface that enables access to notebook sessions and all the features of the Data Science and AI services. The REST API provides access to service functionality but requires programming expertise. An API reference is provided in the product documentation.
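As one concrete instance of the call-the-API pattern Hemant describes, here is a minimal sketch using the OCI Python SDK's AI Language client. The class and method names follow the SDK's ai_language module but should be verified against the current SDK reference, as they can change between versions:

```python
# Minimal sketch: consume an OCI AI service (Language) through the Python SDK.
# Assumes a configured ~/.oci/config file; verify class and method names
# against the current OCI Python SDK documentation.
import oci

config = oci.config.from_file()  # reads the default OCI config profile
client = oci.ai_language.AIServiceLanguageClient(config)

details = oci.ai_language.models.BatchDetectDominantLanguageDetails(
    documents=[
        oci.ai_language.models.DominantLanguageDocument(
            key="doc-1", text="Comment allez-vous?"
        )
    ]
)

response = client.batch_detect_dominant_language(details)
for doc in response.data.documents:
    print(doc.key, doc.languages[0].name)  # expected: French
```

The shape of the exchange is exactly what Hemant outlines: pass in the data to be processed, and the service returns a result, with no infrastructure to manage.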
OCI also provides programming language SDKs for Java, Python, TypeScript, JavaScript, .NET, Go, and Ruby. The command line interface provides both quick access and full functionality without the need for scripting. 03:31 Lois: Hemant, what are the types of OCI AI services that are available? Hemant: OCI AI services is a collection of services with pre-built machine learning models that make it easier for developers to build a variety of business applications. The models can also be custom trained for more accurate business results. The different services provided are digital assistant, language, vision, speech, document understanding, and anomaly detection. 04:03 Lois: I know we're going to talk about them in more detail in the next episode, but can you introduce us to OCI Language, Vision, and Speech? Hemant: OCI Language allows you to perform sophisticated text analysis at scale. Using the pre-trained and custom models, you can process unstructured text to extract insights without data science expertise. Pre-trained models include language detection, sentiment analysis, key phrase extraction, text classification, named entity recognition, and personally identifiable information detection. Custom models can be trained for named entity recognition and text classification with domain-specific data sets. In text translation, neural machine translation is used to translate text across numerous languages. Using OCI Vision, you can upload images to detect and classify objects in them. Pre-trained models and custom models are supported. In image analysis, pre-trained models perform object detection, image classification, and optical character recognition, while custom models can perform custom object detection by detecting the location of custom objects in an image and providing a bounding box. The OCI Speech service is used to convert media files to readable text that's stored in JSON and SRT formats. Speech enables you to easily convert media files containing human speech into highly accurate text transcriptions. 05:52 Nikita: That's great. And what about document understanding and anomaly detection? Hemant: Using OCI Document Understanding, you can upload documents to detect and classify text and objects in them. You can process individual files or batches of documents. In OCR, document understanding can detect and recognize text in a document. In text extraction, document understanding provides the word-level and line-level text, and the bounding-box coordinates of where the text is found. In key value extraction, document understanding extracts a predefined list of key-value pairs of information from receipts, invoices, passports, and driver IDs. In table extraction, document understanding extracts content in tabular format, maintaining the row and column relationships of cells. In document classification, document understanding classifies documents into different types. The OCI Anomaly Detection service analyzes large volumes of multivariate or univariate time series data. It increases the reliability of businesses by monitoring their critical assets and detecting anomalies early with high precision. Anomaly detection is the identification of rare items, events, or observations in data that differ significantly from the expectation. 07:34 Nikita: Where is Anomaly Detection most useful?
Hemant: The Anomaly Detection service is designed to help with analyzing large amounts of data and identifying the anomalies at the earliest possible time with maximum accuracy. Different sectors, such as utilities, oil and gas, transportation, manufacturing, telecommunications, banking, and insurance, use the Anomaly Detection service for their day-to-day activities. 08:02 Lois: Ok.. and the first OCI AI service you mentioned was digital assistant… Hemant: Oracle Digital Assistant is a platform that allows you to create and deploy digital assistants, which are AI-driven interfaces that help users accomplish a variety of tasks with natural language conversations. When a user engages with the Digital Assistant, the Digital Assistant evaluates the user input and routes the conversation to and from the appropriate skills. The Digital Assistant greets the user upon access and, upon user request, lists what it can do and provides entry points into the given skills. It routes explicit user requests to the appropriate skills, handles interruptions to flows and disambiguation, and also handles requests to exit the bot. 09:00 Nikita: Excellent! Let's bring Himanshu in to tell us about machine learning services. Hi Himanshu! Let's talk about OCI Data Science. Can you tell us a bit about it? Himanshu: OCI Data Science is the cloud service focused on serving the data scientist throughout the full machine learning life cycle, with support for Python and open source. The service has many features, such as a model catalog, projects, JupyterLab notebooks, model deployment, model training, model management, model explanation, open source libraries, and AutoML. 09:35 Lois: Himanshu, what are the core principles of OCI Data Science? Himanshu: There are three core principles of OCI Data Science. The first one is accelerated. The first principle is about accelerating the work of the individual data scientist. OCI Data Science provides data scientists with open source libraries along with easy access to a range of compute power without having to manage any infrastructure. It also includes Oracle's own library to help streamline many aspects of their work. The second principle is collaborative. It goes beyond an individual data scientist's productivity to enable data science teams to work together. This is done through the sharing of assets, reducing duplicative work, and supporting reproducibility and auditability of models for collaboration and risk management. Third is enterprise grade. That means it's integrated with all the OCI security and access protocols. The underlying infrastructure is fully managed. The customer does not have to think about provisioning compute and storage. And the service handles all the maintenance, patching, and upgrades, so users can focus on solving business problems with data science. 10:50 Nikita: Let's drill down into the specifics of OCI Data Science. So far, we know it's a cloud service to rapidly build, train, deploy, and manage machine learning models. But who can use it? Where is it? And how is it used? Himanshu: It serves data scientists and data science teams throughout the full machine learning life cycle. Users work in a familiar JupyterLab notebook interface, where they write Python code. And how is it used? Users preserve their models in the model catalog and deploy their models to managed infrastructure. 11:25 Lois: Walk us through some of the key terminology that's used. Himanshu: One of the most important product terms in OCI Data Science is projects.
Projects are containers that enable data science teams to organize their work. They represent collaborative workspaces for organizing and documenting data science assets, such as notebook sessions and models. Note that a tenancy can have as many projects as needed, without limits. The notebook session is where the data scientists work. Notebook sessions provide a JupyterLab environment with pre-installed open source libraries and the ability to add others. Notebook sessions are interactive coding environments for building and training models. Notebook sessions run on managed infrastructure, and the user can select CPU or GPU, the compute shape, and the amount of storage without having to do any manual provisioning. The other important feature is the Conda environment. Conda is an open source environment and package management system that was created for Python programs. 12:33 Nikita: What is a Conda environment used for? Himanshu: It is used in the service to quickly install, run, and update packages and their dependencies. Conda easily creates, saves, loads, and switches between environments in your notebook sessions. 12:46 Nikita: Earlier, you spoke about the support for Python in OCI Data Science. Is there a dedicated library? Himanshu: Oracle's Accelerated Data Science (ADS) SDK is a Python library that is included as part of OCI Data Science. ADS has many functions and objects that automate or simplify the steps in the data science workflow, including connecting to data, exploring and visualizing data, training a model with AutoML, and evaluating and explaining models. In addition, ADS provides a simple interface to access the Data Science service model catalog and other OCI services, including object storage. 13:24 Lois: I also hear a lot about models. What are models? Himanshu: Models define a mathematical representation of your data and business process. You create models in notebook sessions, inside projects. 13:36 Lois: What are some other important terminologies related to models? Himanshu: The next term is the model catalog. The model catalog is a place to store, track, share, and manage models. It is a centralized and managed repository of model artifacts. A stored model includes metadata about the provenance of the model, including Git-related information and the script or notebook used to push the model to the catalog. Models stored in the model catalog can be shared across members of a team, and they can be loaded back into a notebook session. The next one is model deployments. Model deployments allow you to deploy models stored in the model catalog as HTTP endpoints on managed infrastructure. 14:24 Lois: So, how do you operationalize these models? Himanshu: Deploying machine learning models as web applications, that is, HTTP API endpoints serving predictions in real time, is the most common way to operationalize models. HTTP API endpoints are flexible and can serve requests for model predictions. Data science jobs enable you to define and run repeatable machine learning tasks on fully managed infrastructure. Nikita: Thanks for that, Himanshu. 14:57 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You'll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all free to subscribers. So, what are you waiting for? Pick a topic, leverage the Oracle University Learning Community to ask questions, and then sit for your certification.
Visit mylearn.oracle.com to get started. 15:25 Nikita: Welcome back! The Oracle AI Stack consists of AI services and machine learning services, and these services are built using AI infrastructure. So, let's move on to that. Hemant, what are the components of OCI AI Infrastructure? Hemant: OCI AI Infrastructure is mainly composed of GPU-based instances. Instances can be virtual machines or bare metal machines. High performance cluster networking allows instances to communicate with each other. Superclusters are a massive network of GPU instances with multiple petabytes per second of bandwidth. And a variety of fully managed storage options, from a single byte to exabytes, are available without upfront provisioning. 16:14 Lois: Can we explore each of these components a little more? First, tell us, why do we need GPUs? Hemant: ML and AI need lots of repetitive computations to be made on huge amounts of data, and parallel computing on GPUs is designed for many processes at the same time. A GPU is a piece of hardware that is incredibly good at performing computations. A GPU has thousands of lightweight cores, all working on their share of data in parallel. This gives them the ability to crunch through extremely large data sets at tremendous speed. 16:54 Nikita: And what are the GPU instances offered by OCI? Hemant: GPU instances are ideally suited for model training and inference. Bare metal and virtual machine compute instances powered by NVIDIA H100, A100, A10, and V100 GPUs are made available by OCI. 17:14 Nikita: So how do we choose among these different GPU options? Hemant: For large-scale AI training, data analytics, and high performance computing, bare metal instances with eight NVIDIA H100 or eight NVIDIA A100 GPUs can be used. These provide up to nine times faster AI training and 30 times higher acceleration for AI inferencing. The other bare metal and virtual machine instances are used for smaller AI training, inference, streaming, gaming, and virtual desktop infrastructure. 17:53 Lois: And why would someone choose the OCI AI stack over its counterparts? Hemant: Oracle offers all the features and is the most cost-effective option when compared to its counterparts. For example, the BM GPU 4.8 version 2 instance costs just $4 per hour and is used by many customers. Superclusters are a massive network with multiple petabytes per second of bandwidth. They can scale up to 4,096 OCI bare metal instances with 32,768 GPUs. We also have a choice of bare metal A100 or H100 GPU instances, and we can select a variety of storage options, like object store, or block store, or even file system. For networking speeds, we can reach 1,600 Gb per second with A100 GPUs and 3,200 Gb per second with H100 GPUs. With OCI storage, we can select local SSD with up to four NVMe drives, block storage up to 32 terabytes per volume, object storage up to 10 terabytes per object, and file systems up to eight exabytes per file system. OCI File Storage employs five-way replicated storage located in different fault domains to provide redundancy for resilient data protection. HPC file systems, such as BeeGFS and many others, are also offered. OCI HPC file systems are available on Oracle Cloud Marketplace and make it easy to deploy a variety of high performance file servers. 19:50 Lois: I think a discussion on AI would be incomplete if we don't talk about responsible AI. We're using AI more and more every day, but can we actually trust it? Hemant: For us to trust AI, it must be driven by ethics that guide us as well.
Nikita: And do we have some principles that guide the use of AI? Hemant: AI should be lawful, complying with all applicable laws and regulations. AI should be ethical, that is, it should adhere to the ethical principles and values that we uphold as humans. And AI should be robust, both from a technical and a social perspective, because even with good intentions, AI systems can cause unintentional harm. AI systems do not operate in a lawless world. A number of legally binding rules at the national and international levels apply or are relevant to the development, deployment, and use of AI systems today. The law not only prohibits certain actions but also enables others, like protecting the rights of minorities or protecting the environment. Besides horizontally applicable rules, various domain-specific rules exist that apply to particular AI applications, for instance, the medical device regulation in the health care sector. In the AI context, equality entails that the system's operations cannot generate unfairly biased outputs. And while we adopt AI, citizens' rights should also be protected. 21:30 Lois: Ok, but how do we derive AI ethics from these? Hemant: There are three main principles. AI should be used to help humans and allow for oversight. It should never cause physical or social harm. Decisions taken by AI should be transparent and fair, and should also be explainable. AI that follows these AI ethical principles is responsible AI. So if we map the AI ethical principles to responsible AI requirements, they look like this: AI systems should follow human-centric design principles and leave meaningful opportunity for human choice. This means securing human oversight. AI systems and the environments in which they operate must be safe and secure, they must be technically robust, and they should not be open to malicious use. The development, deployment, and use of AI systems must be fair, ensuring equal and just distribution of both benefits and costs. AI should be free from unfair bias and discrimination. Decisions taken by AI should, to the extent possible, be explainable to those directly and indirectly affected. 23:01 Nikita: This is all great, but what does a typical responsible AI implementation process look like? Hemant: First, governance needs to be put in place. Second, develop a set of policies and procedures to be followed. And once implemented, ensure compliance by regular monitoring and evaluation. Lois: And this is all managed by developers? Hemant: Typical roles that are involved in the implementation cycle are developers, deployers, and end users of the AI. 23:35 Nikita: Can we talk about AI specifically in health care? How do we ensure that there is fairness and no bias? Hemant: AI systems are only as good as the data that they are trained on. If that data is predominantly from one gender or racial group, the AI systems might not perform as well on data from other groups. 24:00 Lois: Yeah, and there's also the issue of ensuring transparency, right? Hemant: AI systems often make decisions based on complex algorithms that are difficult for humans to understand. As a result, patients and health care providers can have difficulty trusting the decisions made by the AI. AI systems must be regularly evaluated to ensure that they are performing as intended and not causing harm to patients. 24:29 Nikita: Thank you, Hemant and Himanshu, for this really insightful session.
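The regular, group-aware evaluation Hemant calls for can be made concrete with a few lines of Python. A minimal sketch, assuming a fitted scikit-learn-style classifier and a held-out test set with a demographic column; all variable names are illustrative:

```python
# Check whether a trained model performs comparably across demographic groups.
# Assumes `model` is any fitted classifier with .predict(), and X_test,
# y_test, groups are a held-out test set plus one demographic label per row.
import numpy as np
from sklearn.metrics import accuracy_score

def per_group_accuracy(model, X_test, y_test, groups):
    y_test = np.asarray(y_test)
    groups = np.asarray(groups)
    y_pred = model.predict(X_test)
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        report[g] = accuracy_score(y_test[mask], y_pred[mask])
    return report  # large gaps between groups signal a potential fairness problem

# Hypothetical output: {'group_a': 0.91, 'group_b': 0.74}
# A gap like this would prompt a review of training data balance.
```

Accuracy is only one lens; the same per-group loop can be run over false negative rates or any other metric that matters in the domain, health care included.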
If you're interested in learning more about the topics we discussed today, head on over to mylearn.oracle.com and search for the Oracle Cloud Infrastructure AI Foundations course. Lois: That's right, Niki. You'll find demos that you can watch as well as skill checks that you can attempt to better your understanding. In our next episode, we'll get into the OCI AI Services we discussed today and talk about them in more detail. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 25:05 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
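As a follow-up to the model deployments discussed in this episode: once a model is deployed from the model catalog, it is typically invoked as a signed HTTP request. A minimal sketch, assuming the OCI Python SDK's request signer; the endpoint URL is a placeholder you would copy from your own model deployment's details:

```python
# Invoke an OCI Data Science model deployment endpoint with a signed request.
# Assumes a configured ~/.oci/config; the endpoint URL below is a placeholder,
# and the payload shape depends entirely on the deployed model.
import oci
import requests

config = oci.config.from_file()
signer = oci.signer.Signer(
    tenancy=config["tenancy"],
    user=config["user"],
    fingerprint=config["fingerprint"],
    private_key_file_location=config["key_file"],
)

endpoint = "https://modeldeployment.<region>.oci.customer-oci.com/<ocid>/predict"
payload = {"data": [[5.1, 3.5, 1.4, 0.2]]}   # illustrative feature vector

response = requests.post(endpoint, json=payload, auth=signer)
print(response.status_code, response.json())
```

Because the deployment is just an authenticated HTTP endpoint, any client that can sign OCI requests can consume predictions, which is what makes this the common operationalization path.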
Culture is a big component of software engineering success. A commitment to excellence and employee satisfaction is crucial. Success begins at the top with strong, passionate leadership.
In this week's episode, Lois Houston and Nikita Abraham, along with Senior Instructor Himanshu Raj, take you through the extraordinary capabilities of Generative AI, a subset of deep learning that doesn't make predictions but rather creates its own content. They also explore the workings of Large Language Models. Oracle MyLearn: https://mylearn.oracle.com/ou/learning-path/become-an-oci-ai-foundations-associate-2023/127177 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! In our last episode, we went over the basics of deep learning. Today, we'll look at generative AI and large language models, and discuss how they work. To help us with that, we have Himanshu Raj, Senior Instructor on AI/ML. So, let's jump right in. Hi Himanshu, what is generative AI? 01:00 Himanshu: Generative AI refers to a type of AI that can create new content. It is a subset of deep learning, where the models are trained not to make predictions but rather to generate output on their own. Think of generative AI as an artist who looks at a lot of paintings and learns the patterns and styles present in them. Once it has learned these patterns, it can generate new paintings that resemble what it learned. 01:27 Lois: Let's take an example to understand this better. Suppose we want to train a generative AI model to draw a dog. How would we achieve this? Himanshu: You would start by giving it a lot of pictures of dogs to learn from. The AI does not know anything about what a dog looks like. But by looking at these pictures, it starts to figure out common patterns and features, like dogs often have pointy ears, narrow faces, whiskers, etc. You can then ask it to draw a new picture of a dog. The AI will use the patterns it learned to generate a picture that hopefully looks like a dog. But remember, the AI is not copying any of the pictures it has seen before but creating a new image based on the patterns it has learned. This is the basic idea behind generative AI. In practice, the process involves a lot of complex maths and computation, and there are different techniques and architectures that can be used, such as variational autoencoders (VAEs) and Generative Adversarial Networks (GANs). 02:27 Nikita: Himanshu, where is generative AI used in the real world? Himanshu: Generative AI models have a wide variety of applications across numerous domains. For image generation, generative models like GANs are used to generate realistic images. They can be used for tasks like creating artwork, synthesizing images of human faces, or transforming sketches into photorealistic images. For text generation, large language models like GPT-3, which are generative in nature, can create human-like text.
This has applications in content creation, like writing articles and generating ideas, and in conversational AI, like chatbots and customer service agents. They are also used in programming for code generation and debugging, and much more. Generative AI models can also be used for music generation. They create new pieces of music after being trained on a specific style or collection of tunes. A famous example is OpenAI's MuseNet. 03:21 Lois: You mentioned large language models in the context of text-based generative AI. So, let's talk a little more about it. Himanshu, what exactly are large language models? Himanshu: LLMs are a type of artificial intelligence model built to understand, generate, and process human language at a massive scale. They were primarily designed for sequence-to-sequence tasks such as machine translation, where an input sequence is transformed into an output sequence. LLMs can be used to translate text from one language to another. For example, an LLM could be used to translate English text into French. To do this job, the LLM is trained on a massive data set of text and code, which allows it to learn the patterns and relationships that exist between different languages. The LLM translates “How are you?” from English to the French “Comment allez-vous?” It can also answer questions like, what is the capital of France? And it would answer, the capital of France is Paris. And it can write an essay on a given topic. For example, ask it to write an essay on the French Revolution, and it will come up with a response complete with a title and an introduction. 04:33 Lois: And how do LLMs actually work? Himanshu: LLMs are typically based on deep learning architectures such as transformers. They are also trained on vast amounts of text data to learn language patterns and relationships, with a massive number of parameters, usually on the order of millions or even billions. LLMs also have the ability to comprehend and understand natural language text at a semantic level. They can grasp context, infer meaning, and identify relationships between words and phrases. 05:05 Nikita: What are the most important factors for a large language model? Himanshu: Model size and parameters are crucial aspects of large language models and other deep learning models. They significantly impact the model's capabilities, performance, and resource requirements. So, what is model size? The model size refers to the amount of memory required to store the model's parameters and other data structures. Larger model sizes generally lead to better performance, as they can capture more complex patterns and representations from the data. The parameters are the numerical values of the model that change as it learns to minimize the model's error on the given task. In the context of LLMs, parameters refer to the weights and biases of the model's transformer layers. Parameters are usually measured in terms of millions or billions. For example, GPT-3, one of the largest LLMs to date, has 175 billion parameters, making it extremely powerful in language understanding and generation. Tokens represent the individual units into which a piece of text is divided during processing by the model. In natural language, tokens are usually words, subwords, or characters. Some models have a maximum token limit that they can process, and longer text may require truncation or splitting. Again, balancing model size, parameters, and token handling is crucial when working with LLMs. 06:29 Nikita: But what's so great about LLMs?
Himanshu: Large language models can understand and interpret human language more accurately and contextually. They can comprehend complex sentence structures, nuances, and word meanings, enabling them to provide more accurate and relevant responses to user queries. These models can generate human-like text that is coherent and contextually appropriate. This capability is valuable for content creation, automated writing, and generating personalized responses in applications like chatbots and virtual assistants. They can perform a variety of tasks. Large language models are very versatile and adaptable to various industries. They can be customized to excel in applications such as language translation, sentiment analysis, code generation, and much more. LLMs can handle multiple languages, making them valuable for cross-lingual tasks like translation, sentiment analysis, and understanding diverse global content. Large language models can also be fine-tuned for a specific task using a minimal amount of domain data. The efficiency of LLMs usually grows with more data and parameters. 07:34 Lois: You mentioned the "sequence-to-sequence tasks" earlier. Can you explain the concept in simple terms for us? Himanshu: Understanding language is difficult for computers and AI systems. The reason is that words often have meanings based on context. Consider a sentence such as "Jane threw the frisbee, and her dog fetched it." In this sentence, there are a few things that relate to each other. Jane is doing the throwing. The dog is doing the fetching. And "it" refers to the frisbee. Suppose we are looking at the word "it" in the sentence. As humans, we easily understand that "it" refers to the frisbee. But for a machine, it can be tricky. The goal in sequence problems is to find patterns, dependencies, or relationships within the data and make predictions, perform classification, or generate new sequences based on that understanding. 08:27 Lois: And where are sequence models mostly used? Himanshu: Some common examples of sequence models include natural language processing (NLP) tasks, such as machine translation, text generation, sentiment analysis, and language modeling, which involve dealing with sequences of words or characters. Speech recognition: converting audio signals into text involves working with sequences of phonemes or subword units to recognize spoken words. Music generation: generating new music involves modeling musical sequences, notes, and rhythms to create original compositions. Gesture recognition: sequences of motion or hand gestures are used to interpret human movements for applications such as sign language recognition or gesture-based interfaces. Time series analysis: in fields such as finance, economics, weather forecasting, and signal processing, time series data is used to predict future values, detect anomalies, and understand patterns in temporal data. 09:35 The Oracle University Learning Community is an excellent place to collaborate and learn with Oracle experts and fellow learners. Grow your skills, inspire innovation, and celebrate your successes. All your activities, from liking a post to answering questions and sharing with others, will help you earn a valuable reputation, badges, and ranks to be recognized in the community. Visit mylearn.oracle.com to get started. 10:03 Nikita: Welcome back! Himanshu, what would be the best way to solve those sequence problems you mentioned? Let's use the same sentence, "Jane threw the frisbee, and her dog fetched it," as an example.
Himanshu: The solution is transformers. It's as if the model has a bird's-eye view of the entire sentence and can see how all the words relate to each other. This allows it to understand the sentence as a whole instead of just a series of individual words. Transformers, with their self-attention mechanism, can look at all the words in the sentence at the same time and understand how they relate to each other. For example, a transformer can simultaneously understand the connection between "Jane" and "dog" even though they are far apart in the sentence. 10:52 Nikita: But how? Himanshu: The answer is attention, which adds context to the text. Attention would notice that "dog" comes after "frisbee," "fetched" comes after "dog," and "it" comes after "fetched." The transformer does not look at "it" in isolation. Instead, it also pays attention to all the other words in the sentence at the same time. By considering all these connections, the model can figure out that "it" likely refers to the frisbee. The most famous current models in natural language processing tasks, such as GPT and BERT, consist of dozens of transformer layers or variants of them. 11:32 Lois: I was looking at the AI Foundations course on MyLearn and came across the terms "prompt engineering" and "fine-tuning." Can you shed some light on them? Himanshu: A prompt is the input or initial text provided to the model to elicit a specific response or behavior. So, this is something which you write or ask a language model. Now, what is prompt engineering? Prompt engineering is the process of designing and formulating specific instructions or queries to interact with a large language model effectively. In the context of large language models, such as GPT-3 or BERT, prompts are the input text or questions given to the model to generate responses or perform specific tasks. The goal of prompt engineering is to ensure that the language model understands the user's intent correctly and provides accurate and relevant responses. 12:26 Nikita: That sounds easy enough, but fine-tuning seems a bit more complex. Can you explain it with an example? Himanshu: Imagine you have a versatile recipe robot named Chef Bot. Suppose that Chef Bot is designed to create delicious recipes for any dish you desire. If you ask it for a pizza recipe, Chef Bot recognizes the prompt as a request for a pizza recipe, and it knows exactly what to do. However, if you want Chef Bot to be an expert in a particular type of cuisine, such as Italian dishes, you fine-tune Chef Bot for Italian cuisine by immersing it in a culinary crash course filled with Italian cookbooks, traditional Italian recipes, and even Italian cooking shows. During this process, Chef Bot becomes more specialized in creating authentic Italian recipes, and this process is called fine-tuning. LLMs are general-purpose models that are pre-trained on large data sets but are often fine-tuned to address specific use cases. When you combine prompt engineering and fine-tuning, you get a culinary wizard in Chef Bot: a recipe robot that is not only great at understanding specific dish requests but is also capable of fulfilling them, even mastering the art of cooking in a particular culinary style. 13:47 Lois: Great! Now that we've spoken about all the major components, can you walk us through the life cycle of a large language model? Himanshu: The life cycle of a Large Language Model (LLM) involves several stages, from its initial pre-training to its deployment and ongoing refinement. The first stage of this life cycle is pre-training.
The LLM is initially pre-trained on a large corpus of text data from the internet. During pre-training, the model learns grammar, facts, reasoning abilities, and general language understanding. The model predicts the next word in a sentence given the previous words, which helps it capture relationships between words and the structure of language. The second phase is fine-tuning initialization. After pre-training, the model's weights are initialized, and it's ready for task-specific fine-tuning. Fine-tuning can involve supervised learning on labeled data for specific tasks, such as sentiment analysis, translation, or text generation. The model is fine-tuned on specific tasks using a smaller, domain-specific data set. The weights from pre-training are updated based on the new data, making the model task-aware and specialized. The next phase of the LLM life cycle is prompt engineering. In this phase, you craft effective prompts to guide the model's behavior in generating specific responses. Different prompt formulations, instructions, or contexts can be used to shape the output. 15:13 Nikita: Ok… we're with you so far. What's next? Himanshu: The next phase is evaluation and iteration. Models are evaluated using various metrics to assess their performance on specific tasks. Iterative refinement involves adjusting model parameters, prompts, and fine-tuning strategies to improve results. As a part of this step, you also do few-shot and one-shot inference: if needed, you further fine-tune the model with a small number of examples, basically a few examples (few-shot) or a single example (one-shot), for new tasks or scenarios. You also perform bias mitigation and consider the ethical concerns that may arise in the model's output, implementing measures to ensure fairness, inclusivity, and responsible use. 16:07 Himanshu: The next phase in the LLM life cycle is deployment. Once the model has been fine-tuned and evaluated, it is deployed for real-world applications. Deployed models can perform tasks such as text generation, translation, summarization, and much more. You also perform monitoring and maintenance in this phase: you continuously monitor the model's performance and output to ensure it aligns with desired outcomes, and you periodically update and retrain the model to incorporate new data and to adapt to evolving language patterns. The overall life cycle can also include a feedback loop, where you gather feedback from users and incorporate it into the model's improvement process. You use this feedback to further refine prompts, fine-tuning, and overall model behavior. RLHF, which is Reinforcement Learning from Human Feedback, is a very good example of this feedback loop. You also research and innovate as a part of this life cycle, continuing to develop new techniques to enhance the model's capabilities and address the challenges associated with it. 17:19 Nikita: As we're talking about the LLM life cycle, I see that fine-tuning is not only about making an LLM task-specific. So, what are some other reasons you would fine-tune an LLM? Himanshu: The first one is task-specific adaptation. Pre-trained language models are trained on extensive and diverse data sets and have good general language understanding. They excel in language generation and comprehension tasks, though this broad understanding of language may not lead to optimal performance on specific tasks. These models are not task-specific. So the solution is fine-tuning.
The fine-tuning process customizes a pre-trained model for a specific task by further training it on task-specific data to adapt the model's knowledge. The second reason is domain-specific vocabulary. Pre-trained models might lack knowledge of specific words and phrases essential for certain tasks in fields such as the legal, medical, finance, and technical domains. This can limit their performance when applied to domain-specific data. Fine-tuning enables the model to adapt and learn domain-specific words and phrases, which differ from one domain to another. 18:35 Himanshu: The third reason to fine-tune is efficiency and resource utilization. Fine-tuning is computationally efficient compared to training from scratch. It reuses the knowledge from pre-trained models, saving time and resources, and it requires fewer iterations to achieve task-specific competence. Shorter training cycles expedite the model development process and conserve computational resources, such as GPU memory and processing power. Fine-tuning enables quicker model deployment, with a faster time to production for real-world applications. Fine-tuning is also scalable, enabling adaptation to various tasks with the same base model, which further reduces resource demands and leads to cost savings for research and development. The fourth reason to fine-tune is ethical concerns. Pre-trained models learn from diverse data and can inherit its biases. Fine-tuning might not completely eliminate biases, but careful curation of task-specific data helps avoid biased or harmful vocabulary. The responsible use of domain-specific terms promotes ethical AI applications. 19:53 Lois: Thank you so much, Himanshu, for spending time with us. We had such a great time learning from you. If you want to learn more about the topics discussed today, head over to mylearn.oracle.com and get started on our free AI Foundations course. Nikita: Yeah, we even have a detailed walkthrough of the architecture of transformers that you might want to check out. Join us next week for a discussion on the OCI AI Portfolio. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 20:24 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
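To make the "learn the patterns, then generate" idea from the transcript above concrete, here is a minimal sketch in Python. It uses a character-level bigram model, far simpler than the GANs, VAEs, or GPT-style models Himanshu describes, but it shows the same principle: the model does not retrieve stored sentences; it samples new sequences from learned statistics. The corpus and seed are made up for illustration and do not come from the episode.

```python
import random
from collections import defaultdict

# Toy "generative model": learn which character tends to follow which,
# then sample new text. Far simpler than a GAN or an LLM, but the same
# learn-patterns-then-generate idea discussed in the episode.
corpus = "the dog fetched the frisbee and the dog ran home"

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)          # learning phase: record successor characters

random.seed(0)
ch, out = "t", ["t"]
for _ in range(40):               # generation phase: sample, step, repeat
    ch = random.choice(follows[ch])
    out.append(ch)

print("".join(out))  # new text that resembles, but need not copy, the corpus
```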
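The discussion of tokens and maximum token limits can be sketched just as briefly. Real tokenizers split text into subwords (via byte-pair encoding, for instance) rather than whitespace words, and real context limits run to thousands of tokens; the 8-token limit below is a made-up number for illustration only.

```python
MAX_TOKENS = 8  # hypothetical context limit, purely illustrative

def tokenize(text: str) -> list[str]:
    # Stand-in for a real subword tokenizer: here, one token per word.
    return text.split()

def truncate(tokens: list[str], limit: int = MAX_TOKENS) -> list[str]:
    # Longer inputs must be truncated (or split) to fit the model's limit.
    return tokens[:limit]

text = "Jane threw the frisbee, and her dog fetched it across the yard"
tokens = tokenize(text)
print(len(tokens), "tokens ->", truncate(tokens))
```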
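Here is a minimal sketch of the scaled dot-product self-attention that, as Himanshu explains, lets a transformer look at all the words in a sentence at once. The embeddings and projection matrices are random stand-ins for what a trained model would learn, so the printed attention weights for "it" are arbitrary; the point is the mechanism, in which every token scores its relevance to every other token in parallel.

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = ["Jane", "threw", "the", "frisbee", "and", "her", "dog", "fetched", "it"]
d = 16                                  # embedding size (illustrative)
X = rng.normal(size=(len(tokens), d))   # one stand-in vector per token

# Learned projections in a real model; random here.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

scores = Q @ K.T / np.sqrt(d)           # every token scores every other token
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax: rows sum to 1
context = weights @ V                   # context-aware representation per token

# With trained weights, the row for "it" would put high weight on "frisbee".
print(dict(zip(tokens, weights[tokens.index("it")].round(2))))
```

Production transformers stack dozens of these layers and run several attention heads in parallel, but each head computes exactly this kind of weighted mixing.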
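Prompt engineering, as described in the transcript, changes how the request is framed rather than changing the model. The sketch below contrasts a zero-shot prompt with a few-shot prompt of the kind mentioned in the evaluation phase; call_llm is a hypothetical placeholder, not a real API.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a request to some hosted LLM; not a real API.
    raise NotImplementedError("wire this up to an actual model endpoint")

# Zero-shot: instructions alone carry the intent.
zero_shot = "Translate the following English text to French:\n\nHow are you?"

# Few-shot: a handful of worked examples steers the model toward the
# desired format and behavior without any retraining.
few_shot = (
    "Translate English to French.\n"
    "English: Good morning. -> French: Bonjour.\n"
    "English: Thank you. -> French: Merci.\n"
    "English: How are you? -> French:"
)

print(zero_shot)
print(few_shot)
```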
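Finally, the fine-tuning stage of the life cycle, reduced to the shape of the training loop: start from existing weights, train briefly on a small labeled task-specific data set, and update the weights rather than training from scratch. The tiny linear "model" and the random labeled data below are stand-ins; a real run would load a pre-trained LLM and a curated domain data set.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(8, 2)    # stand-in: pretend these weights came from pre-training
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Small task-specific "data set" (random stand-ins for, say, sentiment labels).
x = torch.randn(32, 8)
y = torch.randint(0, 2, (32,))

for epoch in range(5):     # far fewer iterations than pre-training requires
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()        # adjust the pre-trained weights on the new data
    optimizer.step()
    print(epoch, round(loss.item(), 4))
```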
Carers are people who provide unpaid care and support to someone who needs help with their day-to-day living, who has a disability, a mental health condition, or any health condition requiring care and assistance. Carer Gateway is an Australian Government program providing support to carers. The program's digital photographic exhibition 'Real Carers Real Stories - In Their Own Words' features ten carers. Aakriti Chhetri is one of these unpaid carers. She provides support to her friends Himanshu and Neha, who live with muscular dystrophy. Chhetri spoke to SBS Nepali about her experience of working as a carer in a new country, the rewards, and the learnings from her caregiving journey. - If you live in Australia, you have probably heard the word 'carer'. Carers are people who look after those living with a disability, a mental health problem, or some other kind of health condition. Gathering the experiences of such carers, 'Carer Gateway', a program under the Australian Government, has put together a digital photo exhibition called 'Real Carers Real Stories - In Their Own Words', in which Adelaide's Aakriti Chhetri also appears. Chhetri, who came to Australia as a student six years ago, currently works as a community project officer at an aged care facility in Adelaide, and she is the carer for two of her friends who live in Sydney. Listen to Chhetri's conversation with SBS Nepali about her experience as a carer, along with some advice and suggestions for people who are caring for someone.
In this episode of Molecule to Market, you'll go inside the outsourcing space of the global drug development sector with Himanshu Gadgil, CEO at Enzene Biosciences. Your host, Raman Sehgal, discusses the pharmaceutical and biotechnology supply chain with Himanshu, covering: how a personal tragedy led him from the US to India on a mission to make an impact; shifting from a technical to a commercial focus to launch several biosimilars in India and beyond; being at the inception of a big pharma biotech spin-out focused on building a platform of innovation that contributes to society; and taking its cost-effective, continuous manufacturing platform from India to the US via a CDMO vertical focused on novel biologics. Dr. Himanshu Gadgil serves as the CEO at Enzene Biosciences Ltd. Under his leadership, Enzene has grown from a start-up biotech to a multi-vertical, multi-site product development and manufacturing service-based biopharmaceutical company. Prior to Enzene, he worked as the Sr. Vice President at Intas Pharmaceutical Ltd., where he was instrumental in turning around the commercial product pipeline by launching several biosimilar products in multiple geographies. During his stint in the US, he led different facets of process and product development at Amgen, spearheading IND, BLA, and market authorizations of various blockbuster biotech products. At the inception of his career, he joined Waters Corporation, where he pioneered the development of QbD-enabling multi-attribute methodologies for biopharmaceutical characterization. Himanshu holds a Ph.D. in Biochemistry from the University of Tennessee and is a passionate scientific leader and innovator with over 50 publications and patents. Please subscribe, tell your industry colleagues, and join us in celebrating and promoting the value and importance of the global life science outsourcing space. We'd also appreciate a positive rating! Molecule to Market is sponsored and funded by ramarketing, an international marketing, design, digital and content agency helping companies differentiate, get noticed and grow in life sciences.
Chef Himanshu Saini is one of Dubai's most acclaimed chefs. His mission is to change the perception of Indian cuisine and elevate how we experience Indian food today. His restaurant, Trèsind Studio, is his ode to the culinary legacy of his roots and his perspective on Indian cuisine. We will hear about Himanshu Saini's childhood in an agricultural family in India, with plenty of fresh produce, herbs, and flowers used in the kitchen. At the end of the podcast, he will reveal his favourite restaurant recommendations in Dubai, in India, and in the rest of the world. All of the recommendations mentioned in this podcast and thousands more are available for free in the World of Mouth app: https://www.worldofmouth.app/ Hosted on Acast. See acast.com/privacy for more information.
Last week, co-host Tiffany Eslick spoke to Chef Himanshu Saini about his cooking inspirations and the story behind his iconic restaurants in Dubai. On this bonus episode, we're sharing all the delicious details behind the Rising India menu at TresInd Studio.
This week, Tiffany sits down with Chef Himanshu Saini, the chef behind the award-winning Tresind Studio in Dubai. Tresind originally opened in 2014 with a take on a progressive Indian fine-dining experience. Since then, they've opened more restaurants and brands—for example, Carnival by Tresind and the all-vegetarian Avatara, among others. Their flagship restaurant, Tresind Studio, now located in Palm Jumeirah, earned two Michelin stars this year. Chef Himanshu shares the inspirations behind his cooking, and why there's a framed dinner plate paired with all the awards they've won.
Join me as I sit down with the founders of Arch Lending, delving into the intricate world of crypto lending and blockchain financial services. From understanding the importance of qualified custodians to the tax implications of crypto-backed loans, this interview is a treasure trove of insights for both newcomers and seasoned crypto enthusiasts. Plus, get a sneak peek into Arch Lending's upcoming features and their vision for the future.
Climate tech is one of the emerging areas in the technology landscape, offering a variety of new-age solutions for adapting to a changing climate. Largely unexplored, climate tech has the potential to empower even grassroots sectors with the ability to forecast and adapt to extreme weather and unprecedented climatic events. In this unique episode, we talked with Himanshu Gupta, the co-founder of Climate AI, who explained the various facets of climate tech, the emerging AI tools, and how he is deploying them in the sectors in which he operates. From agriculture to commodity supply chains, Gupta talked about the various tech solutions that AI can offer for safeguarding against climate risks. A full transcript of the episode is available in English and Hindi. Presented by 101Reporters. Follow TIEH podcast on Twitter, Linkedin & YouTube. Himanshu Gupta is on Twitter & Linkedin. Our hosts: Shreya Jai on Twitter & Linkedin, and Dr. Sandeep Pai on Twitter & Linkedin. Podcast Producer: Tejas Dayananda Sagar on Twitter & Linkedin.
Himanshu kept seeing paisley everywhere: from his home temple to his couch cushions to his mom's pants. Some of these products were Indian but a lot weren't. How did this pattern get embraced by the world? See articlesofinterest.substack.com for images, links and more.
Lucknow may surely be known for its Nazhakhat and Nawaabi Andaaz, but this capital of Uttar Pradesh holds more flavor and richness than just the Laziz Zaika of Awadhi cuisine. Guided by an incredible treasure trove of poetry, an amazing storyteller, bestselling author of the book "Qissa Qissa", a columnist, and a journalist, Himanshu Bajpai, is here to unveil the remarkable tales of 'The City of Nawabs', Lucknow. Immerse yourself in its rich history through the eyes and heart of Himanshu as he unravels its hidden gems through the ancient art of dastangoi, the mesmerizing Urdu form of oral storytelling. Himanshu, a true master of dastan, transports us back in time, vividly painting portraits of Lucknow's valiant past. Straight from the vicinity of the 'Residency', Himanshu's extensive knowledge takes us on a journey like no other. With his passion for storytelling and his vast reservoir of Urdu and Hindi poetry, Himanshu breathes life into the character of Lucknow, ensuring its legacy lives on. In the 7th episode of "The Billion Dreams", prepare to be captivated by the magic of this Lucknawi andaaz as Himanshu Bajpai shares soul-stirring stories in his unique storytelling style. If you enjoyed the episode, don't forget to leave a like and hearts in the comments below. Do share your thoughts on the video, and if there's any fact about Lucknow that we missed out on, do let us know. Share this episode with your friends and family and let them know about the incredible story of Lucknow. Subscribe to the podcast for more such conversations. Stay connected and keep yourself inspired with new thoughts. Be a part of the dialogue. Click here: https://linktr.ee/Ashishvidyarthi Alshukran Bandhu, Alshukran Zindagi. --------Follow Himanshu on: YouTube: https://www.youtube.com/c/himanshubajpaiofficial Instagram: https://www.instagram.com/himanshu.lakhnauaa --------Topics: 01:08 Who is Himanshu Bajpai? 01:44 History of British Residency 02:55 Battle of Chinhat 04:43 Sher (Couplet) of Bashir Badr 05:00 Origin Story of "Achhe Achho Ko Paani Pilana" 06:44 Anecdote of Majaz Lakhnawi 07:35 Who is a true Lucknawi? 07:43 A satire from Kanhaiya Lal Kapoor 08:31 Sher on Zubaan 08:58 Zamane Ki Baat 09:23 Famous Story of Dal from Abdul Halim Sharar 14:27 'Sher O Shayari' on the spot #AshishVidyarthi #HimanshuBajpai #Lucknow #Podcast #TheBillionDreams #Shayari #Sher #Lucknawi #Hindi #Urdu
Himanshu Palsule is the CEO of Cornerstone OnDemand, the billion-dollar leader in talent management and learning technology. Previously, he held C-suite positions at Epicor Software, Sage Software, Open Systems International, HCL, and others. Himanshu is an alum of the University of Saint Thomas and the Manipal Institute of Technology. He is a board member at several public and private companies and nonprofits. --- Support this podcast: https://podcasters.spotify.com/pod/show/theindustryshow/support
Himanshu Sahay & Dhruv Patel are the Co-Founders of Arch (www.archlending.com). Backed by Tribe Capital, Picus Capital, & more, Arch is the most secure way to borrow against alternative assets. Users can access instant credit without selling their crypto or equity stake, on a platform with best-in-class security and a regulatory-first approach. In this episode we discuss their policy to not "touch or reloan customer funds for any reason," their state-by-state approach to getting regulatory compliance right, their thoughts on how to best approach building in crypto right now, and much more. Recorded Monday, May 8th, 2023.
Arch cofounders Dhruv Patel and Himanshu Sahay discuss: Arch coming out of stealth; key lessons learned from the crypto lending crisis; Arch's differentiated offering within crypto and other alternative assets; leveraging crypto rails to build a lender of the future; and using their experience at top technology companies to build a superior consumer product. Learn more at Archlending.com and on Twitter
Today, Dee and Anand discuss the latest business and financial news, including Microsoft's decision to lay off 10,000 employees, the reported plummet in Shein's valuation as they seek $3 billion, and the Oxfam report stating that the richest 1% of people amassed almost two-thirds of new wealth created in the last two years. They also discuss recent developments in the world of pop culture, national news, politics, and world news, including Dr. Dre selling music assets to Universal Music and Shamrock, police surveillance in Beverly Hills, and Donald Trump's potential return to Twitter and Facebook. Plus, they dive into the world of fintech and crypto, discussing Arch's new crypto lending product and the gathering of prostitutes in Davos for the annual meeting of the global elite. Lastly, the gentlemen drop their weekly Winners, Losers, and Content. - written by ChatGPT Timeline of What Was Discussed: Microsoft's strange decision. (3:12) Netflix's subscriptions are popping! (6:30) The big challenge for Shein. (10:46) How the richest 1% of people amassed almost two-thirds of new wealth created in the last two years. (18:12) Peter Thiel is doing WELL. (23:02) Dr. Dre is cashing out! (25:53) The Beverly Hills police are watching you. (32:13) The Don is back! (36:33) The strange concept of Davos. (38:11) A Group Chat exclusive interview with Dhruv and Himanshu from Arch. (41:12) Winners, Losers, and Content. (58:12) Group Chat Shout Outs. (1:05:34) Related Links/Products Mentioned Chatty Kathy Club Microsoft is laying off 10,000 employees Reed Hastings steps down as Netflix CEO amid subscriber gains Shein valuation reportedly plummets by a third as it seeks $3B The richest 1% of people amassed almost two-thirds of new wealth created in the last two years, Oxfam has said. - unusual_whales on Twitter Trump backer Peter Thiel reportedly made $1.8 billion cashing out an 8-year bet on crypto – when he was still touting a massive bitcoin price surge Dr. Dre Selling Music Assets to Universal Music and Shamrock In (and Above) Beverly Hills, Police Are Watching BREAKING: Donald Trump is preparing to come back to Twitter and Facebook, per NBC News Prostitutes gather in Davos for annual meeting of global elite Fintech Firm Arch Starts Crypto Lending Product, Raises $2.75M Arch - The Easiest Way to Borrow Against Crypto and NFTs Musk's Twitter Saw Revenue Drop 35% in Q4, Sharply Below Projections What to know about extraordinary measures as debt ceiling hits Zerofux Clothing Co Connect with Group Chat! Watch The Pod #1 Newsletter In The World For The Gram Tweet With Us Exclusive Facebook Content We're @groupchatpod on Snapchat
Himanshu Palsule is the CEO of Cornerstone, a 4,000-person talent experience platform with thousands of business customers and millions of users around the world. Himanshu and his wife came to the U.S. from Bombay in the '80s for a one-year project at IBM that they both worked on, and they never left. In today's discussion, we take a look at some pretty tough topics, including cancel culture, whether workers today have a sense of loyalty, pride, and work ethic, whether it's crazy that we have to entice people to go back to work, and how leadership and the workplace as a whole are both changing. We also explore some of the big trends Himanshu is paying attention to and why he is so fascinated with quantum physics. ------------------ Get ad-free listening, early access to new episodes and bonus episodes with the subscription version of the show The Future of Work Plus. To start, it will only be available on Apple Podcasts and it will cost $4.99/month or $49.99/year, which is equivalent to the cost of a cup of coffee. ________________ Over the last 15 years, I've had the privilege of speaking and working with some of the world's top leaders. Here are 15 of the best leadership lessons that I learned from the CEOs of organizations like Netflix, Honeywell, Volvo, Best Buy, The Home Depot, and others. I hope they inspire you and give you things you can try in your work and life. Get the PDF here. --------------------- Get the latest insights on the Future of Work, Leadership and employee experience through my daily newsletter at futureofworknewsletter.com Let's connect on social! Linkedin: http://www.linkedin.com/in/jacobmorgan8 Instagram: https://instagram.com/jacobmorgan8 Twitter: http://www.twitter.com/jacobm Facebook: https://www.facebook.com/FuturistJacob
The Afters with Himanshu "Heems" Suri. On our new weekly lightning round mini ep with Himanshu "Heems" Suri, we're fucking around with the Grammys, Stoffa, reliving your golden years, erasing Das Racist from history, Nav's barber, Danny Brown's orthodontist, being a known racist, only eating Pizza Hut and Taco Bell, collabing with Rammstein vs. Insane Clown Posse, white people, Ezra Koenig, Indian food, and much more. For more Throwing Fits, check us out on Patreon: www.patreon.com/throwingfits.
The best rapper on this pod, first coolest. This week, the boys are welcoming the man himself, Himanshu "Heems" Suri, to the stu. Heems came through to spit bars on why exactly Das Racist broke up, fumbling the bag, tour life pre-social media, the curse of being ahead of your time, the enduring legacy of "Combination Pizza Hut and Taco Bell", getting roofied in Japan, the joke rap label, learning how to take care of your mental health, memories of Anthony Bourdain, the best Indian food in Queens, leaving rap for the corporate world of a 9 to 5, ambitions of a retired rapper, whether or not Nav stole his swag, the culture shock of attending Wesleyan, going to high school with James, gangs of Stuyvesant High, supporting Arsenal, blowing bags at Stoffa, a very special TF freestyle and much more on this deep and dynamic episode of The Only Podcast That Matters™. For more Throwing Fits, check us out on Patreon: www.patreon.com/throwingfits. --- This episode is sponsored by Anchor: The easiest way to make a podcast. https://anchor.fm/app