POPULARITY
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we're joined by Michael McCourt, the head of engineering at SigOpt. In our conversation with Michael, we explore the vast space around the topic of optimization, including the technical differences between ML and optimization and where each is applied, what the path to increasing complexity looks like for a practitioner, and the relationship between optimization and active learning. We also discuss the research frontier for optimization and how folks think about the interesting challenges and open questions for the field, how optimization approaches appeared at the latest NeurIPS conference, and Mike's excitement about the emergence of interdisciplinary work between the machine learning community and other fields like the natural sciences. The complete show notes for this episode can be found at twimlai.com/go/545.
In this Intel Conversations in the Cloud audio podcast: S. Venkat from Mindtree joins host Jake Smith to talk about digital customer service, specifically the company's Mindtree Cognitive Contact Center, which combines existing call center infrastructure with new technology. The two discuss how Mindtree worked with the Intel Distribution of Python along with SigOpt (a recent Intel acquisition) for hyperparameter tuning to reach necessary accuracy and performance benchmarks. For more information, visit: https://www.mindtree.com/ Follow S. Venkat on LinkedIn at: https://www.linkedin.com/in/venkateswaran-sundareswaran-69a6363 Follow Jake on Twitter at: https://twitter.com/jakesmithintel
An A.I. model is similar to a boat in that it needs constant maintenance to perform. The reality is that A.I. models need adjusted boundaries and guidelines to remain efficient. And when you live in a world where everyone is trying to get bigger and faster and gain a certain edge, Scott Clark is helping make that possible with his finely tuned A.I. modeling techniques.

"As you're building up these rules and constructs for how that system will even learn itself, there's a lot of parameters that you need to set and tune. There's all these magical numbers that go into these systems. If you don't have a system of record for this, if you're just throwing things against the wall and seeing what sticks, and then only checking the best one, and you don't have a record of what you tried, what the trade-offs were, which parameters were the most important, and how it traded off different metrics, it can seem like a very opaque process. At least the hyperparameter optimization and neural architecture search and kind of tuning part of the process can be a little bit more explainable, a little bit more repeatable, and a little bit more optimal."

More explainable and more optimal, but most importantly scalable and reproducible. On this episode of IT Visionaries, Scott, the CEO and co-founder of SigOpt, a company that's on a mission to empower modeling systems to reach their fullest potential, explains the basic steps that go into successful models and how his team tweaks and optimizes those models to build more efficient processes. Plus, Scott touches on the future of algorithmic models, including how they will improve and where they struggle. Enjoy this episode.

Main Takeaways

Bad Data, Bad!: When you're building algorithmic models, you have to focus not only on the data you are putting into those models but also on where that data is coming from and whether it is trustworthy. Untrustworthy data (data that comes from an unknown source or is biased in some way) can lead to models that deliver poor results.

Delivering Consistency: While every algorithm needs to be tweaked and tuned at the start, the best way to deliver consistent, scalable algorithmic models is to define hyper-specific patterns that the algorithm can abide by. When an algorithm knows what rules it is looking for (such as "this person only likes medium-sized shirts with stripes"), it has a set of hyper-specific boundaries it can operate within to deliver the best results.

Where Is the Band Conductor?: Algorithms will continue to infiltrate our everyday lives, but the truth is they still need humans to effectively run them, to tune them, and to make sure that the decisions they are making are the right ones.

IT Visionaries is brought to you by the Salesforce Platform - the #1 cloud platform for digital transformation of every experience. Build connected experiences, empower every employee, and deliver continuous innovation - with the customer at the center of everything you do. Learn more at salesforce.com/platform
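Scott's point about a "system of record" for tuning is easy to make concrete. The sketch below is purely illustrative and not part of SigOpt's product: train_and_evaluate, the parameter names, and the metrics are all placeholders. It shows the minimal discipline he describes, recording every configuration tried together with its metrics so the trade-offs stay inspectable, rather than keeping only the single best result.

```python
import csv
import itertools
import random

def train_and_evaluate(params):
    """Placeholder for a real training run; returns metrics for one configuration."""
    # In practice this would train a model with `params` and measure it.
    return {"accuracy": random.random(), "latency_ms": random.uniform(5, 50)}

# The configurations we intend to try: a small grid over two hyperparameters.
search_space = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "hidden_units": [64, 128, 256],
}

trials = []
for lr, units in itertools.product(*search_space.values()):
    params = {"learning_rate": lr, "hidden_units": units}
    metrics = train_and_evaluate(params)
    trials.append({**params, **metrics})  # every attempt is recorded, not just the best

# Persist the full history so metric trade-offs can be revisited later.
with open("trials.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=trials[0].keys())
    writer.writeheader()
    writer.writerows(trials)

best = max(trials, key=lambda t: t["accuracy"])
print("Best configuration so far:", best)
```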
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we continue our ICML series joined by Gustavo Malkomes, a research engineer at Intel via their recent acquisition of SigOpt. In our conversation with Gustavo, we explore his paper Beyond the Pareto Efficient Frontier: Constraint Active Search for Multiobjective Experimental Design, which focuses on a novel algorithmic solution for the iterative model search process. This new algorithm empowers teams to run experiments where they are not optimizing particular metrics but instead identifying parameter configurations that satisfy constraints in the metric space. This allows users to explore multiple metrics at once in an efficient, informed, and intelligent way that lends itself to real-world, human-in-the-loop scenarios. The complete show notes for this episode can be found at twimlai.com/go/505.
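The paper's framing is easier to picture with a toy example. The following sketch is not the algorithm from the paper (which chooses points intelligently to cover the satisfactory region); it only illustrates the problem setup under assumed names and thresholds: rather than maximizing a single metric, we keep any parameter configuration whose metrics satisfy all of the stated constraints.

```python
import random

def measure(config):
    """Placeholder for an expensive experiment returning multiple metrics."""
    return {
        "accuracy": 0.7 + 0.3 * random.random(),
        "inference_ms": random.uniform(2.0, 20.0),
    }

# Constraints on the metric space: we want any configuration that satisfies
# all of them, rather than the one configuration that maximizes a single metric.
constraints = {
    "accuracy": lambda v: v >= 0.85,
    "inference_ms": lambda v: v <= 8.0,
}

def satisfies(metrics):
    return all(check(metrics[name]) for name, check in constraints.items())

feasible = []
for _ in range(50):  # a fixed experiment budget
    config = {
        "learning_rate": 10 ** random.uniform(-5, -1),
        "batch_size": random.choice([16, 32, 64, 128]),
    }
    metrics = measure(config)
    if satisfies(metrics):
        feasible.append((config, metrics))

print(f"Found {len(feasible)} satisfactory configurations out of 50 tried")
```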
In this Intel Conversations in the Cloud audio podcast: SigOpt cofounder and CEO Scott Clark joins host Jake Smith to discuss his company's recent acquisition by Intel, how this acquisition accelerates their ability to meet their mission to empower AI engineers, and the latest innovation in the SigOpt intelligent experimentation platform that enables faster design, exploration, and optimization of models. Scott goes into detail about how SigOpt software helps developers track their modeling artifacts, design experiments, explore their parameter space, and manage hyperparameter optimization (HPO). Jake and Scott close the episode by talking about the future of AI. Learn more about SigOpt at: https://sigopt.com/ Follow Jake on Twitter at: https://twitter.com/jakesmithintel
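To make that experiment workflow concrete, here is a minimal sketch of the suggest/observe loop that SigOpt's Core API documentation describes: define parameters and a metric, repeatedly request a suggested configuration, evaluate it, and report the observation back. The API token and train_and_score are placeholders, and exact field names can vary between client versions, so treat this as a sketch rather than a drop-in script.

```python
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")  # placeholder token

# Describe what we are tuning and what we are measuring.
experiment = conn.experiments().create(
    name="Text classifier tuning (sketch)",
    parameters=[
        dict(name="learning_rate", type="double", bounds=dict(min=1e-5, max=1e-1)),
        dict(name="hidden_units", type="int", bounds=dict(min=32, max=512)),
    ],
    metrics=[dict(name="validation_accuracy", objective="maximize")],
    observation_budget=30,
)

def train_and_score(assignments):
    """Stand-in for real training code; returns a validation metric for one configuration."""
    # e.g. build a model from assignments["learning_rate"] and assignments["hidden_units"],
    # train it, and evaluate on a validation set.
    return 0.0

# The optimization loop: SigOpt suggests a configuration, we evaluate it and report back.
for _ in range(experiment.observation_budget):
    suggestion = conn.experiments(experiment.id).suggestions().create()
    value = train_and_score(suggestion.assignments)
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id,
        value=value,
    )

best = conn.experiments(experiment.id).best_assignments().fetch()
```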
Martin Casado and Sonal Chokshi explore what makes a great VP of Engineering at startups! You'll hear how successful VPEs are evaluated, plus the ideal experience and success criteria. You'll hear rapid-fire responses covering how to scale yourself, KPIs, the ideal VPE hiring time for startups, and what VPEs should definitely NOT do.

MARTIN CASADO, GENERAL PARTNER @ ANDREESSEN HOROWITZ
He was previously cofounder and CTO at Nicira (acquired by VMware for $1.26 billion). At VMware, Martin was SVP & GM of the Networking and Security Business Unit (which he scaled to a $600 million run-rate). Martin's early career was at Lawrence Livermore National Laboratory working on large-scale simulations for the Department of Defense, networking, and cybersecurity. He holds both a PhD and a master's degree in Computer Science from Stanford University, where he created the software-defined networking (SDN) movement and cofounded Illuminics Systems (acquired by Quova). He's been awarded both the ACM Grace Murray Hopper award and the NEC C&C award, and he's an inductee of the Lawrence Livermore Lab's Entrepreneur's Hall of Fame. Martin serves on the board of: ActionIQ, Astranis, DeepMap, Imply, Kong, Pindrop Security, RapidAPI, SigOpt, and Yubico.

"In my experience with a number of engineering leaders, whether or not they're a good engineer is totally orthogonal to the actual role. And in fact, someone that's deeply passionate about a particular architecture, technology, or approach can be very damaging because you have a power asymmetry in the team." - Martin Casado

SONAL CHOKSHI, EDITOR IN CHIEF @ ANDREESSEN HOROWITZ AKA "a16z"
Sonal built and oversees all of Andreessen Horowitz's editorial operations, including showrunning and hosting the a16z Podcast, leading production of the a16z Crypto Canon, and more. Prior to a16z, Sonal was a Senior Editor at Wired. Prior to that, Sonal was responsible for content and community at Xerox PARC. Before moving back to California from NYC, Sonal was doing graduate work in developmental and cognitive psychology at Columbia University's school of education and worked as a researcher "ethnographer" on NSF grants around teacher professional development and early numeracy. She studied English and Psychology at UCLA.

SHOWNOTES
Hiring misconceptions & why VPs of Engineering are so valuable (4:21)
Ideal experience and success criteria for a startup VPE (8:45)
Does a VPE need to be a good engineer? (10:56)
Two key areas VPEs are evaluated on (13:01)
How to balance product and engineering as a VPE (17:34)
The hard issue of managing people (19:54)
Why engineering analytics and conscious decisions are important to building great engineering orgs (22:24)
Good KPIs and how to scale yourself as a VPE (24:19)
When is the right time to become a VPE at a startup? (26:25)
What a VPE should NOT do (27:44)
What is a CTO's role and how do you work with them as a VPE? (31:00)
Takeaways (32:0)

LINKS
a16z Podcast

ANNOUNCEMENT
The ELC Summit has arrived! This is our annual conference to celebrate and inspire great leadership in engineering. Head to ELCSUMMIT.COM for more details!

--- Send in a voice message: https://anchor.fm/engineeringleadership/message
On the latest episode of AI at Work, we hear from Scott Clark, the co-founder and CEO of SigOpt. Tune in to learn more about Scott's history as well as how SigOpt came to be. Additionally, Scott discusses why SigOpt, an optimization platform founded in 2014, went into Y Combinator. ~ Check out Scott & SigOpt ~ Scott: https://www.linkedin.com/in/sc932/ SigOpt: https://sigopt.com
Nick Payton, B2B market leader at SigOpt, talks about the typical steps in AI/machine learning model development, how SigOpt helps some of the most advanced companies using AI build better models faster, and key trends in the use of AI across the industry.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
In this TWIML Democast, we're joined by SigOpt Co-Founder and CEO Scott Clark. Scott details the SigOpt platform, and gives us a live demo! This episode is best consumed by watching the corresponding video demo, which you can find at twimlai.com/talk/324.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we’re joined by Matt Adereth, managing director of investments at Two Sigma, and return guest Scott Clark, co-founder and CEO of SigOpt, to discuss: • The end to end modeling platform at Two Sigma, who it serves, and challenges faced in production and modeling. • How Two Sigma has attacked the experimentation challenge with their platform. • The relationship between the optimization and infrastructure teams at SigOpt. • What motivates companies that aren’t already heavily invested in platforms, optimization or automation, to do so. The complete show notes can be found at twimlai.com/talk/273. Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! The first 10 listeners who register get their ticket for 75% off using the discount code TWIMLFIRST! Follow along with the entire AI Platforms Vol 2 series at twimlai.com/aiplatforms2. Thanks to SigOpt for their continued support of the podcast, and their sponsorship of this episode! Check out their machine learning experimentation and optimization suite, and get a free trial at twimlai.com/sigopt.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we’re joined by Kelley Rivoire, engineering manager working on machine learning infrastructure at Stripe. Kelley and I caught up at a recent Strata Data conference to discuss: • Her talk "Scaling model training: From flexible training APIs to resource management with Kubernetes." • Stripe’s machine learning infrastructure journey, including their start from a production focus. • Internal tools used at Stripe, including Railyard, an API built to manage model training at scale & more! The complete show notes can be found at twimlai.com/talk/272. Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! The first 10 listeners who register get their ticket for 75% off using the discount code TWIMLFIRST! Follow along with the entire AI Platforms Vol 2 series at twimlai.com/aiplatforms2. Thanks to SigOpt for their continued support of the podcast, and their sponsorship of this episode! Check out their machine learning experimentation and optimization suite, and get a free trial at twimlai.com/sigopt.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we continue our AI Platforms series joined by Yi Zhuang, Senior Staff Engineer at Twitter & Tech Lead for Machine Learning Core Environment at Twitter Cortex. In our conversation, we cover: • The machine learning landscape at Twitter, including the history of the Cortex team • Deepbird v2, which is used for model training and evaluation solutions, and its integration with TensorFlow 2.0. • The newly assembled "Meta" team, which is tasked with exploring the bias, fairness, and accountability of their machine learning models, and much more! The complete show notes can be found at twimlai.com/talk/271. Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! The first 10 listeners who register get their ticket for 75% off using the discount code TWIMLFIRST! Follow along with the entire AI Platforms Vol 2 series at twimlai.com/aiplatforms2. Thanks to SigOpt for their continued support of the podcast, and their sponsorship of this episode! Check out their machine learning experimentation and optimization suite, and get a free trial at twimlai.com/sigopt. Finally, visit twimlai.com/3bday to help us celebrate TWiML's 3rd Birthday!
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we're joined by Alex Ratner, Ph.D. student at Stanford. In our conversation, we discuss: • Snorkel, the open source framework that is the successor to Stanford's DeepDive project. • How Snorkel is used as a framework for creating training data with weak supervision techniques. • Multiple use cases for Snorkel, including how it is used by large companies like Google. The complete show notes can be found at twimlai.com/talk/270. Follow along with the entire AI Platforms Vol 2 series at twimlai.com/aiplatforms2. Thanks to SigOpt for their continued support of the podcast, and their sponsorship of this episode! Check out their machine learning experimentation and optimization suite, and get a free trial at twimlai.com/sigopt. Finally, visit twimlai.com/3bday to help us celebrate TWiML's 3rd Birthday!
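For readers unfamiliar with weak supervision, the following sketch follows the general pattern from Snorkel's tutorials: write a few noisy labeling functions, apply them to unlabeled data, and let a label model combine their votes into probabilistic training labels. The dataset, rules, and label names below are invented for illustration, and details may differ across Snorkel versions.

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

SPAM, NOT_SPAM, ABSTAIN = 1, 0, -1

# A tiny, made-up unlabeled dataset.
df_train = pd.DataFrame({
    "text": [
        "Win money now, click here!!!",
        "Thanks for the meeting notes, see you tomorrow.",
        "Limited time offer: claim your free prize",
        "Can you review my pull request when you get a chance?",
    ]
})

# Labeling functions: cheap, noisy heuristics instead of hand labels.
@labeling_function()
def lf_contains_money(x):
    return SPAM if "money" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_contains_free(x):
    return SPAM if "free" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_work_language(x):
    return NOT_SPAM if "thanks" in x.text.lower() or "review" in x.text.lower() else ABSTAIN

# Build the label matrix from all labeling functions, then denoise their votes.
applier = PandasLFApplier([lf_contains_money, lf_contains_free, lf_work_language])
L_train = applier.apply(df_train)

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=200, seed=0)
probs = label_model.predict_proba(L_train)  # probabilistic labels for downstream training
print(probs)
```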
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
In this, the kickoff episode of AI Platforms Vol. 2, we're joined by Adrien Gaidon, Machine Learning Lead at Toyota Research Institute. Adrien and I caught up to discuss his team's work on deploying distributed deep learning in the cloud, at scale. In our conversation, we discuss: • The beginning and gradual scaling up of TRI's platform. • Their distributed deep learning methods, including their use of stock PyTorch. • Applying DevOps to their research infrastructure, and much more! The complete show notes for this episode can be found at twimlai.com/talk/269. Thanks to SigOpt for their continued support of the podcast, and their sponsorship of this episode! Check out their machine learning experimentation and optimization suite, and get a free trial at twimlai.com/sigopt. Finally, visit twimlai.com/3bday to help us celebrate TWiML's 3rd Birthday!
Data, data, everywhere, nor any drop to drink. Or so would say Coleridge, if he were a big company CEO trying to use A.I. today -- because even when you have a ton of data, there's not always enough signal to get anything meaningful from AI. Why? Because, "like they say, it's 'garbage in, garbage out' -- what matters is what you have in between," reminds Databricks co-founder (and director of the RISElab at U.C. Berkeley) Ion Stoica. And even then it's still not just about data operations, emphasizes SigOpt co-founder Scott Clark; your data scientists need to really understand "What's actually right for my business and what am I actually aiming for?" And then get there as efficiently as possible. But beyond defining their goals, how do companies get over the "cold start" problem when it comes to doing more with AI in practice, asks a16z operating partner Frank Chen (who also released a microsite on getting started with AI earlier this year)? The guests on this short "a16z Bytes" episode of the a16z Podcast -- based on a conversation that took place at our recent annual Summit event -- share practical advice about this and more.
When you have “a really hot, frothy space” like AI, even the most basic questions — like what is it good for, how do you make sure your data is in shape, and so on — aren't answered. This is just as true for the companies eager to adopt the technology and get into the space, as it is for those building companies around that space, observes Joe Spisak, Head of Partnerships at Amazon Web Services. “People treat it like magic,” adds a16z general partner Martin Casado. This magical realism is especially true of AI, because by definition — i.e., machines learning — there is a bit of a “black box” between what you put in and what you get out of it. Which may be fine… Except when you have to completely change the data being fed into that black box, or you're shooting for a completely different target to come out of it. That's why, observes Scott Clark, CEO and co-founder of SigOpt, “an untuned, sophisticated system will underperform a tuned simple system” almost every time. So what does this mean for organizations going from so-called “toy” problems in R&D to real business results tied to KPIs and ROI? In this episode of the a16z Podcast, Casado, Clark, and Spisak (in conversation with Sonal Chokshi) share their thoughts on what's happening and what's needed for AI in practice, given their vantage points working with both large companies and AI startups. What does it mean for data scientists and domain experts? For differentiation and advantage? Because even though we finally have widely available building blocks for AI, we need the scaffolding too… and only then can we build something powerful on top of it.
Join us as we explore how SigOpt is using AWS to optimize machine learning and AI pipelines. SigOpt is an Optimization-as-a-Service platform that seamlessly tunes model configuration parameters via an ensemble of optimization algorithms behind a simple API. This captures performance that may otherwise be left on the table by conventional techniques while also reducing the time and cost of developing and optimizing new models. In this session, you will learn how SigOpt has optimized ML and AI pipelines on Amazon EC2 instances for various algorithms (including the latest generation of GPU-optimized instances), as well as how they internally leverage AWS to build their flexible and scalable platform and evaluation framework.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
As you all know, a few weeks ago I spent some time in SF at the Artificial Intelligence Conference. While I was there, I had just enough time to sneak away and catch up with Scott Clark, Co-Founder and CEO of SigOpt, a company whose software is focused on automatically tuning your model's parameters through Bayesian optimization. We dive pretty deeply into that process over the course of this discussion, while hitting on topics like exploration vs. exploitation, Bayesian regression, heterogeneous configuration models, and covariance kernels. I had a great time and learned a ton, but be forewarned, this is most definitely a Nerd Alert show! Notes for this show can be found at twimlai.com/talk/50.
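For a taste of the mechanics covered in this episode, here is a minimal Bayesian optimization loop built on scikit-learn's Gaussian process regressor with a Matern covariance kernel and an expected improvement acquisition function. The one-dimensional objective and all constants are made up, and this is not SigOpt's ensemble of methods; it only illustrates the exploration-versus-exploitation cycle of fitting a surrogate to past observations, scoring candidates, and evaluating the most promising point next.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    """Toy black-box function we pretend is expensive to evaluate."""
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))   # a few initial random evaluations
y = objective(X).ravel()

candidates = np.linspace(-2, 2, 500).reshape(-1, 1)

for _ in range(15):
    # Surrogate model: Bayesian regression over the observations so far.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)

    # Expected improvement trades off exploitation (high mean) and exploration (high std).
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.max()
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

    x_next = candidates[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("Best input found:", X[np.argmax(y)].item(), "value:", y.max())
```

Swapping the Matern kernel for another covariance function, or expected improvement for a different acquisition rule, changes how aggressively the loop explores; those are exactly the kinds of choices the conversation digs into.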
In this episode of the ARCHITECHT AI Show, SigOpt co-founder and CEO Scott Clark talks about how and why his company is delivering "optimization as a service" to machine learning models. While deep learning models are powerful, achieving optimal performance can require tuning hundreds of hyperparameters, a time-consuming process when done manually. Clark also discusses the business of selling into the AI workflow; the very necessary gap between academic research and production systems; and his time kicking off the Yelp Dataset Challenge. In the news segment, co-hosts Derrick Harris, Signe Brewster, and Chris Albrecht cover John Deere's acquisition of Blue River Technology; Drive.ai's partnership with Lyft; Facebook and Microsoft's new AI standard; and what to make of the new MIT-IBM Watson AI Lab.
The Top Entrepreneurs in Money, Marketing, Business and Life
Scott Clark. He's the co-founder and CEO of SigOpt, a Y Combinator and Andreessen Horowitz backed optimization-as-a-service startup. Scott has been applying optimal learning technologies in industry and academia for years. He holds a PhD in applied mathematics and an MS in computer science from Cornell University, and a BS in mathematics, physics, and computational physics from Oregon State University. He was chosen as one of Forbes' 30 Under 30 in 2016.

Famous Five:
Favorite Book? – The Hard Thing About Hard Things
What CEO do you follow? – Marc Andreessen, Ben Horowitz, and Phil Knight
Favorite online tool? – Gmail and Slack
How many hours of sleep do you get? – 8
If you could let your 20-year-old self know one thing, what would it be? – Scott would tell himself that it doesn't get easier, so set up habits and processes to make things sustainable while you have the time and ability to do it, because that will definitely help once things ramp up

Time Stamped Show Notes:
00:44 – Nathan introduces Scott to the show
01:25 – SigOpt is optimization as a service
01:27 – SigOpt helps companies build different, complex AI and machine learning pipelines
01:41 – SigOpt is a SaaS model and the subscription is based on the number of models per month
01:53 – Pricing starts at $2500 a month and enterprise starts at $10K a month
02:13 – Average monthly RPU
02:33 – SigOpt usually engages at the executive level
02:38 – People wanted to use AI for their businesses but couldn't find the right person to do the work, so they go with SigOpt
03:23 – One of SigOpt's clients is Prudential
03:31 – Insurance companies are augmenting their traditional methods with the new data that is being collected
03:48 – As their data increases, the need for the best possible performance increases
04:26 – What SigOpt does is different from traditional machine-learning-as-a-service companies
04:41 – Scott shares a specific example of how SigOpt works with credit card companies
04:44 – Fraud detection has been around for decades
05:28 – SigOpt fine-tunes the different knobs and levers in the configuration parameters that make the machine learning model work
06:15 – SigOpt focuses on black-box optimization
07:45 – SigOpt relies on the domain expertise of the person at the specific firm to build a deep learning model
08:31 – SigOpt applies an ensemble of global optimization techniques to the problem so they can efficiently configure the system
09:20 – SigOpt suggests different curvatures
09:58 – SigOpt has raised $8.8M to date
10:30 – SigOpt never sees the underlying data
11:11 – The entire system is designed to be hands-off
11:43 – SigOpt was launched at the end of 2013
11:51 – Number of paying customers is around a dozen
12:18 – Average MRR
12:25 – SigOpt prefers annual deals
12:54 – No churn yet
13:14 – Team size is 13
13:35 – The capital raised was spent on the team and the enterprise sales efforts
13:54 – 3-4 of the team are in sales
14:05 – CAC
14:22 – They sometimes visit their customers
14:55 – Investors like to make big bets on new technologies
15:33 – The goal for the Series A money
16:33 – Average expenses
17:40 – The Famous Five

3 Key Points:
The need for AI and machine learning is growing fast, and there aren't enough people who are qualified to develop these products.
Headcount can eat up most of a company's expenses, especially in the technology industry.
Optimization services make a business more efficient, leading to little to no churn.
Resources Mentioned: The Top Inbox – The site Nathan uses to schedule emails to be sent later, set reminders in inbox, track opens, and follow-up with email sequences Klipfolio – Track your business performance across all departments for FREE Hotjar – Nathan uses Hotjar to track what you’re doing on this site. He gets a video of each user visit like where they clicked and scrolled to make the site a better experience Acuity Scheduling – Nathan uses Acuity to schedule his podcast interviews and appointments Host Gator – The site Nathan uses to buy his domain names and hosting for the cheapest price possible Audible – Nathan uses Audible when he’s driving from Austin to San Antonio (1.5-hour drive) to listen to audio books Show Notes provided by Mallard Creatives
What does it mean to tune an algorithm, how does it matter in a business context, and what are the approaches being developed today when it comes to tuning algorithms? This week's guest helps us answer these questions and more. CEO and Co-Founder Scott Clark of SigOpt takes time to explain the dynamics of tuning, goes into some of the cutting-edge methods for getting tuning done, and shares advice on how businesses using machine learning algorithms can continue to refine and adjust their parameters in order to glean greater results.
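As a concrete, generic example of what tuning looks like in code (assumed synthetic data and arbitrary parameter ranges, not SigOpt's approach), the sketch below uses scikit-learn's RandomizedSearchCV to search a random forest's parameter space with cross-validation and report the best configuration found.

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# Synthetic stand-in for a real business dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "knobs and levers": parameter ranges the search is allowed to try.
param_distributions = {
    "n_estimators": randint(50, 400),
    "max_depth": randint(2, 20),
    "min_samples_leaf": randint(1, 10),
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=25,   # number of configurations to evaluate
    cv=3,        # cross-validation folds per configuration
    random_state=0,
)
search.fit(X_train, y_train)

print("Best parameters found:", search.best_params_)
print("Held-out accuracy:", search.score(X_test, y_test))
```

The same model can perform quite differently depending on the parameter values chosen, which is why refining and adjusting them, as discussed in the episode, pays off in measurable results.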
Talk Python To Me - Python conversations for passionate developers
See the full show notes for this episode on the website at talkpython.fm/51.
In this week's episode, we meet with Michael McCourt, the head of engineering at SigOpt. He is an industry expert on optimization algorithms, so expect to learn about constraint active search, SigOpt's new open-source optimizer, and how to run an engineering team.

Sponsors: Chuck's Resume Template, Developer Book Club starting, Become a Top 1% Dev with a Top End Devs Membership

Links: Michael McCourt | SigOpt, LinkedIn: Michael McCourt

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy