Podcasts about Clustering

  • 250 PODCASTS
  • 398 EPISODES
  • 33m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • May 30, 2025 LATEST

POPULARITY (chart: 2017-2024)


Best podcasts about Clustering

Latest podcast episodes about Clustering

Machine Learning Guide
MLG 036 Autoencoders

Machine Learning Guide

Play Episode Listen Later May 30, 2025 65:55


Autoencoders are neural networks that compress data into a smaller "code," enabling dimensionality reduction, data cleaning, and lossy compression by reconstructing original inputs from this code. Advanced autoencoder types, such as denoising, sparse, and variational autoencoders, extend these concepts for applications in generative modeling, interpretability, and synthetic data generation.

Links: Notes and resources at ocdevel.com/mlg/36. Try a walking desk - stay healthy & sharp while you learn & code. Build the future of multi-agent software with AGNTCY. Thanks to T.J. Wilder from intrep.io for recording this episode!

Fundamentals of Autoencoders: Autoencoders are neural networks designed to reconstruct their input data by passing data through a compressed intermediate representation called a "code." The architecture typically follows an hourglass shape: a wide input and output separated by a narrower bottleneck layer that enforces information compression. The encoder compresses input data into the code, while the decoder reconstructs the original input from this code.

Comparison with Supervised Learning: Unlike traditional supervised learning, where the output differs from the input (e.g., image classification), autoencoders use the same vector for both input and output.

Use Cases: Dimensionality Reduction and Representation: Autoencoders perform dimensionality reduction by learning compressed forms of high-dimensional data, making it easier to visualize and process data with many features. The compressed code can be used for clustering, visualization in 2D or 3D graphs, and input into subsequent machine learning models, saving computational resources and improving scalability.

Feature Learning and Embeddings: Autoencoders enable feature learning by extracting abstract representations from the input data, similar in concept to learned embeddings in large language models (LLMs). While effective for many data types, autoencoder-based encodings are less suited for variable-length text compared to LLM embeddings.

Data Search, Clustering, and Compression: By reducing dimensionality, autoencoders facilitate vector searches, efficient clustering, and similarity retrieval. The compressed codes enable lossy compression analogous to audio codecs like MP3, with the difference that autoencoders lack domain-specific optimizations for preserving perceptually important data.

Reconstruction Fidelity and Loss Types: Loss functions in autoencoders compare reconstructed outputs to original inputs, often using different loss types depending on input variable types (e.g., Boolean vs. continuous). Compression via autoencoders is typically lossy, meaning some information from the input is lost during reconstruction, and the areas of information loss may not be easily controlled.

Outlier Detection and Noise Reduction: Since reconstruction errors tend to move data toward the mean, autoencoders can be used to reduce noise and identify data outliers. Large reconstruction errors can signal atypical or outlier samples in the dataset.

Denoising Autoencoders: Denoising autoencoders are trained to reconstruct clean data from noisy inputs, making them valuable for applications in image and audio denoising as well as signal smoothing. Iterative denoising as a principle forms the basis for diffusion models, where repeated application of a denoising autoencoder can gradually turn random noise into structured output.

Data Imputation: Autoencoders can aid in data imputation by filling in missing values: training on complete records and reconstructing missing entries for incomplete records using learned code representations. This approach leverages the model's propensity to output "plausible" values learned from the overall data structure.

Cryptographic Analogy: The separation of encoding and decoding draws parallels to encryption and decryption, though autoencoders are not intended or suitable for secure communication due to their inherent lossiness.

Advanced Architectures: Sparse and Overcomplete Autoencoders: Sparse autoencoders use constraints to encourage code representations with only a few active values, increasing interpretability and explainability. Overcomplete autoencoders have a code size larger than the input, often in applications that require extraction of distinct, interpretable features from complex model states.

Interpretability and Research Example: Research such as Anthropic's "Towards Monosemanticity" applies sparse autoencoders to the internal activations of language models to identify interpretable features correlated with concrete linguistic or semantic concepts. These models can be used to monitor and potentially control model behaviors (e.g., detecting specific language usage or enforcing safety constraints) by manipulating feature activations.

Variational Autoencoders (VAEs): VAEs extend the autoencoder architecture by encoding inputs as distributions (means and standard deviations) instead of point values, enforcing a continuous, normalized code space. Decoding from sampled points within this space enables synthetic data generation, as any point near the center of the code space corresponds to plausible data according to the model.

VAEs for Synthetic Data and Rare Event Amplification: VAEs are powerful in domains with sparse data or rare events (e.g., healthcare), allowing generation of synthetic samples representing underrepresented cases. They can increase model performance by augmenting datasets without requiring changes to existing model pipelines.

Conditional Generative Techniques: Conditional autoencoders extend VAEs by allowing controlled generation based on specified conditions (e.g., generating a house with a pool), through additional decoder inputs and conditional loss terms.

Practical Considerations and Limitations: Training autoencoders and their variants requires computational resources, and their stochastic training can produce differing code representations across runs. Lossy reconstruction, lack of domain-specific optimizations, and limited code interpretability restrict some use cases, particularly where exact data preservation or meaningful decompositions are required.
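
To make the hourglass architecture and reconstruction loss described above concrete, here is a minimal sketch of a fully connected autoencoder in PyTorch. It is illustrative only: the layer sizes, learning rate, and synthetic data are assumptions, not details from the episode.

```python
# Minimal autoencoder sketch (illustrative; layer sizes and data are assumptions).
import torch
from torch import nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim: int = 64, code_dim: int = 8):
        super().__init__()
        # Encoder: wide input squeezed down to a narrow "code" (the bottleneck).
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 32), nn.ReLU(),
            nn.Linear(32, code_dim),
        )
        # Decoder: reconstruct the original input from the code.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 32), nn.ReLU(),
            nn.Linear(32, input_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    model = Autoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()  # input and target are the same vector

    x = torch.randn(256, 64)  # synthetic data stands in for real features
    for epoch in range(200):
        optimizer.zero_grad()
        reconstruction = model(x)
        loss = loss_fn(reconstruction, x)  # reconstruction error
        loss.backward()
        optimizer.step()

    # The 8-dimensional codes can feed clustering, visualization, or outlier scoring;
    # a large reconstruction error on a new sample is the outlier signal mentioned above.
    codes = model.encoder(x)
    print(codes.shape)
```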

NeurologyLive Mind Moments
141: Refining TSC Care: Phenotyping, Clustering, and Clinical Impact

NeurologyLive Mind Moments

Play Episode Listen Later May 16, 2025 22:07


Welcome to the NeurologyLive® Mind Moments® podcast. Tune in to hear leaders in neurology sound off on topics that impact your clinical practice. In this episode, "Refining TSC Care: Phenotyping, Clustering, and Clinical Impact," Ajay Gupta, MD, director of the Tuberous Sclerosis Program at Cleveland Clinic, discusses recently published research that used unsupervised clustering to group over 900 patients with tuberous sclerosis complex (TSC) into four clinically meaningful phenotypic clusters. He outlines the distinct traits of each cluster—ranging from tumor risk to cognitive impairment—and explains how variant-specific genetic data helped reinforce these categories. Gupta, who also serves as a professor of neurology at the Cleveland Clinic Lerner School of Medicine, explores the clinical value of these findings for surveillance planning, early intervention, and future therapeutic trials. He emphasizes that while overlap between clusters exists, this approach lays essential groundwork for precision care and more targeted research in TSC. Looking for more epilepsy discussion? Check out the NeurologyLive® Epilepsy clinical focus page.

Episode Breakdown:
1:00 – Study goals and the shift from genotype-to-phenotype toward phenotype-to-genotype mapping
2:40 – Overview of the 4 main phenotypic clusters identified in the TSC population
8:05 – Genetic domain associations and their impact on clinical monitoring and treatment
11:50 – Neurology News Minute
14:45 – Understanding overlap between clusters and avoiding overprediction in clinical settings
17:00 – Implications for future surveillance strategies and precision candidate selection in TSC trials

The stories featured in this week's Neurology News Minute, which will give you quick updates on the following developments in neurology, are further detailed here:
FDA AdComm Plans to Review Investigational Cell Therapy Deramiocel for DMD Cardiomyopathy
Gene Therapy AAV-GAD Gains Regenerative Medicine Advanced Therapy Designation as Potential Parkinson Treatment
Microbiome-Targeting Therapy MaaT033 Continues to Show Promise in Final Phase 1 Readout

Thanks for listening to the NeurologyLive® Mind Moments® podcast. To support the show, be sure to rate, review, and subscribe wherever you listen to podcasts. For more neurology news and expert-driven content, visit neurologylive.com.

SEO Is Not That Hard
Best of : Move your content to the next level with Keyword Clustering

SEO Is Not That Hard

Play Episode Listen Later May 5, 2025 14:10 Transcription Available


We explore how to revolutionize your content strategy with keyword clustering, a powerful technique for grouping related keywords to target them on a single page rather than creating multiple competing pages.

• Keyword clustering helps prevent content cannibalization while creating more topically relevant pages
• Traditional clustering methods using AI like ChatGPT often create imprecise clusters with limitations
• Our data-driven approach analyzes Google's own search results to identify true keyword relationships
• The method works by finding which URLs rank for multiple related keywords and creating clusters based on these connections (a rough sketch of this idea appears after these notes)
• We've launched a new tool at KeywordsPeopleUse.com that automates this process for any language and location
• You can try clustering up to 500 keywords for free to see how your target topics naturally group together
• Adjust clustering parameters to create tighter or looser keyword groupings based on your content needs
• This is the first step toward building complete topical maps for comprehensive site authority

Try it today for free at keywordspeopleuse.com. If you want to get in touch you can email me at podcast@keywordspeopleuse.com.

SEO Is Not That Hard is hosted by Edd Dawson and brought to you by KeywordsPeopleUse.com. Help feed the algorithm and leave a review at ratethispodcast.com/seo.
You can get your free copy of my 101 Quick SEO Tips at: https://seotips.edddawson.com/101-quick-seo-tips
To get a personal no-obligation demo of how KeywordsPeopleUse could help you boost your SEO and get a 7 day FREE trial of our Standard Plan, book a demo with me now.
See Edd's personal site at edddawson.com
Ask me a question and get on the show: Click here to record a question
Find Edd on Linkedin, Bluesky & Twitter
Find KeywordsPeopleUse on Twitter @kwds_ppl_use
"Werq" Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/
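
As a rough illustration of the SERP-overlap idea described in the notes above (not the actual KeywordsPeopleUse implementation), the sketch below groups keywords whose top-ranking URLs overlap by at least a chosen threshold. The serp_results data and the min_shared_urls parameter are made-up placeholders, and fetching real search results is left out.

```python
# Rough sketch of SERP-overlap keyword clustering (illustrative only).
from itertools import combinations

def cluster_keywords(serp_results: dict[str, set[str]], min_shared_urls: int = 3) -> list[set[str]]:
    """Group keywords whose top-ranking URLs overlap by at least `min_shared_urls`."""
    parent = {kw: kw for kw in serp_results}

    def find(kw: str) -> str:
        while parent[kw] != kw:
            parent[kw] = parent[parent[kw]]  # path compression
            kw = parent[kw]
        return kw

    def union(a: str, b: str) -> None:
        parent[find(a)] = find(b)

    # Link any two keywords that share enough ranking URLs.
    for a, b in combinations(serp_results, 2):
        if len(serp_results[a] & serp_results[b]) >= min_shared_urls:
            union(a, b)

    clusters: dict[str, set[str]] = {}
    for kw in serp_results:
        clusters.setdefault(find(kw), set()).add(kw)
    return list(clusters.values())

serp_results = {  # keyword -> top-ranking URLs (toy data)
    "keyword clustering": {"a.com/1", "b.com/2", "c.com/3", "d.com/4"},
    "keyword grouping":   {"a.com/1", "b.com/2", "c.com/3", "e.com/5"},
    "what is seo":        {"f.com/6", "g.com/7", "h.com/8", "i.com/9"},
}
print(cluster_keywords(serp_results))
# "keyword clustering" and "keyword grouping" land in one cluster; "what is seo" stands alone.
```

Raising or lowering min_shared_urls plays the same role as the "tighter or looser keyword groupings" adjustment mentioned in the notes.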

Beyond UX Design
Cognition Catalog: Clustering Illusion

Beyond UX Design

Play Episode Listen Later Feb 19, 2025 9:49


Understanding the Clustering Illusion: decision-making pitfalls and how to avoid them. We tend to perceive patterns in random sequences of data or events, even when there's no actual correlation or causal relationship present. This bias reflects our brain's tendency to seek order in randomness. What's the opportunity cost of seeing patterns in random data?

Join us for this week's edition of the Cognition Catalog as we explore its impact on our subconscious decisions. Learn how these hidden biases affect team dynamics, workplace decisions, and daily interactions. Discover practical steps to identify and mitigate these biases to create a fair and cohesive work environment. To explore more about the clustering illusion, don't miss the full article on the Cognition Catalog! Don't forget to subscribe to the newsletter to be the first to know when new episodes drop!

Thanks for listening! We hope you dug today's episode. If you liked what you heard, be sure to like and subscribe wherever you listen to podcasts! And if you really enjoyed today's episode, why don't you leave a five-star review? Or tell some friends! It will help us out a ton. If you haven't already, sign up for our email list. We won't spam you. Pinky swear.

Get a FREE audiobook AND support the show
Support the show on Patreon
Check out show transcripts
Check out our website
Subscribe on Apple Podcasts
Subscribe on Spotify
Subscribe on YouTube
Subscribe on Stitcher

Scouting Australia Podcast
Is Investing in One Location Financial Genius or Suicide?

Scouting Australia Podcast

Play Episode Listen Later Feb 16, 2025 46:18


In this week's episode, Sam Gordon and James Ibrahim are joined in the studio by senior Buyers Agent Jason Titus to discuss whether it's a smart move to buy all your investment properties in one location. The boys break down the pros and cons of investing in a single area and how it can impact the growth and sustainability of your portfolio. They dive into key factors like diversification, market cycles, and risk management, as well as the power of clustering and strategic decision-making. Plus, they discuss how to navigate different market phases, and the importance of education in making informed investment choices. If you're serious about building long-term wealth through property, this one's a must-listen!

Talkin‘ Politics & Religion Without Killin‘ Each Other
California's Path to Independence? A Conversation about CalExit with Marcus Ruiz Evans

Talkin‘ Politics & Religion Without Killin‘ Each Other

Play Episode Listen Later Feb 11, 2025 66:36


In this episode, host Corey Nathan engages in a timely and candid discussion with Marcus Ruiz Evans, the leader of the CalExit movement. Marc has been spearheading efforts to make California an independent nation for over a decade, publishing California's Next Century 2.0 in 2012. With the California Secretary of State recently approving a petition to start collecting signatures for an independence initiative, this conversation is more relevant than ever. Marc provides historical context, legal perspectives, and the strategic steps required to potentially break away from the United States.

Straight A Nursing
#384: MMM - Clustering Like a Pro!

Straight A Nursing

Play Episode Listen Later Jan 20, 2025 4:56


Let's start your week strong with a quick tip you can incorporate right away. In this Mo's Monday Minute shortie episode, I'm sharing my pro tips for clustering your care so you can manage your time better and boost your efficiency at the bedside! ___________________ FREE CLASS - If all you've heard are nursing school horror stories, then you need this class! Join me in this on-demand session where I dispel all those nursing school myths and show you that YES...you can thrive in nursing school without it taking over your life! 20 Secrets of Successful Nursing Students – Learn key strategies that will help you be a successful nursing student with this FREE guide! All Straight A Nursing Resources - Check out everything Straight A Nursing has to offer, including free resources and online courses to help you succeed!

Path To Citus Con, for developers who love Postgres
How I got started as a developer & in Postgres with Daniel Gustafsson

Path To Citus Con, for developers who love Postgres

Play Episode Listen Later Jan 17, 2025 82:31


March 5th 2005 at 3 PM in Copenhagen. That's the exact time and place Daniel Gustafsson's career took an unexpected turn, pivoting from operating systems to databases. At LinuxForum that day, Daniel had planned to meet up with the FreeBSD community, but a chance session about Postgres by Bruce Momjian completely blew his mind. By the time Daniel was on the train back to Malmö, he was already compiling Postgres. In this episode of Talking Postgres with Claire Giordano, Postgres major contributor and committer Daniel Gustafsson of Microsoft walks us through how he got his start as a developer and in Postgres—starting with his earliest computing memories of a hulking steel box in his family's living room in Sweden. Also part of Daniel's story: guitar tuning software. And curl!

Links mentioned in this episode:
Wikipedia: ABC 80
Wikipedia: mSQL
Wikipedia: PCBoard BBS (bulletin board system) application
Conference back in 2010: CHAR(10) – Clustering, HA and Replication Conference
Wikipedia: IRIX operating system
Internet Archive Wayback Machine link: LinuxForum Conference Agenda from March 5, 2005 with Bruce Momjian's 3:00pm talk about Postgres
Podcast: Solving every data problem in SQL with Dimitri Fontaine & Vik Fearing
Conference: Nordic PGDay 2025 to happen Mar 18th in Copenhagen
Conference: All Things Open 2025 to happen Oct 12-14 in Raleigh NC
Conference: PGConf.dev 2025 to happen May 13-16 in Montreal, Canada
CFP: POSETTE: An Event for Postgres 2025 CFP open until Feb 9 2025 (it's a virtual event)
Slides from PGConfEU 2024 Talk: What's in a Postgres major release? An analysis of contributions in v17 timeframe
Video of PGConf EU 2024 Talk: Analysis of contributions in the v17 timeframe, by Claire Giordano
Book recommendation: The Dragon Book, a.k.a. Compilers: Principles, Techniques, and Tools
Book recommendation: The Purple Book (or, Wizard Book), a.k.a. Structure and Interpretation of Computer Programs (SICP)
Book recommendation: The Practice of Programming by Kernighan & Pike
Calendar invite: LIVE recording of Ep24 of Talking Postgres podcast to happen on Wed Feb 05, 2025 with guest Robert Haas

Oracle University Podcast
Oracle AI Vector Search: Part 2

Oracle University Podcast

Play Episode Listen Later Oct 29, 2024 12:57


This week, Lois Houston and Nikita Abraham continue their exploration of Oracle AI Vector Search with a deep dive into vector indexes and memory considerations.   Senior Principal APEX and Apps Dev Instructor Brent Dayley breaks down what vector indexes are, how they enhance the efficiency of search queries, and the different types supported by Oracle AI Vector Search.   Oracle Database 23ai: Oracle AI Vector Search Fundamentals: https://mylearn.oracle.com/ou/course/oracle-database-23ai-oracle-ai-vector-search-fundamentals/140188/   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   Twitter: https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!   00:26 Nikita: Welcome back to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services at Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Last week was Part 1 of our discussion on Oracle AI Vector Search. We talked about what it is, its benefits, the new vector data type, vector embedding models, and the overall workflow. In Part 2, we're going to focus on vector indices and memory. 00:56 Nikita: And to help us break it all down, we've got Brent Dayley back with us. Brent is a Senior Principal APEX and Apps Dev Instructor with Oracle University. Hi Brent! Thanks for being with us today. So, let's jump right in! What are vector indexes and how are they useful? Brent: Now, vector indexes are specialized indexing data structures that can make your queries more efficient against your vectors. They use techniques such as clustering, and partitioning, and neighbor graphs. Now, they greatly reduce the search space, which means that your queries happen quicker. They're also extremely efficient. They do require that you enable the vector pool in the SGA. 01:42 Lois: Brent, walk us through the different types of vector indices that are supported by Oracle AI Vector Search. How do they integrate into the overall process? Brent: So Oracle AI Vector Search supports two types of indexes, in-memory neighbor graph vector index. HNSW is the only type of in-memory neighbor graph vector index that is supported. These are very efficient indexes for vector approximate similarity search. HNSW graphs are structured using principles from small world networks along with layered hierarchical organization. And neighbor partition vector index, inverted file flat index, is the only type of neighbor partition index supported. It is a partition-based index which balances high search quality with reasonable speed. 02:35 Nikita: Brent, you mentioned that enabling the vector pool in the SGA is a requirement when working with vector indexes. Can you explain that process for us? Brent: In order for you to be able to use vector indexes, you do need to enable the vector pool area. And in order to do that, what you need to do is set the vector memory size parameter. You can set it at the container database level. And the PDB inherits it from the CDB. 
Now bear in mind that the database does have to be balanced when you set the vector pool. 03:12 Lois: Ok. Are there any other considerations to keep in mind when using vector indices? Brent: Vector indexes are stored in this pool, and vector metadata is also stored here. And you do need to restart the database. So large vector indexes do need lots of RAM, and RAM constrains the vector index size. You should use IVF indexes when there is not enough RAM. IVF indexes use both the buffer cache as well as disk. 03:42 Nikita: And what about memory considerations? Brent: So to remind you, a vector is a numerical representation of text, images, audio, or video that encodes the features or semantic meaning of the data, instead of the actual contents, such as the words or pixels of an image. So the vector is a list of numerical values known as dimensions with a specified format. Now, Oracle does support the int8 format, the float32 format, and the float64 format. Depending on the format depends on the number of bytes. For instance, int8 is one byte, float32 is four bytes. Now, Oracle AI Vector Search supports vectors with up to 65,535 dimensions. 04:34 Lois: What should we know about creating a table with a vector column? Brent: Now, Oracle Database 23ai does have a new vector data type. The new data type was created in order to support vector search. The definition can include the number of dimensions and can include the format. Bear in mind that either one of those are optional when you define your column. The possible dimension formats are int, float 32, and float 64. Float 32 and float 64 are IEEE standards, and Oracle Database will automatically cast the value if needed. 05:18 Nikita: Can you give us a few declaration examples? Brent: Now, if we just do a vector type, then the vectors can have any arbitrary number of dimensions and formats. If we describe the vector type as vector * , *, then that means that vectors can have an arbitrary number of dimensions and formats. Vector and vector * , * are equivalent. Vector with the number of dimensions specified, followed by a comma, and then an asterisk, is equivalent to vector number of dimensions. Vectors must all have the specified number of dimensions, or an error will be thrown. Every vector will have its dimension stored without format modification. And if we do vector asterisk common dimension element format, what that means is that vectors can have an arbitrary number of dimensions, but their format will be up-converted or down-converted to the specified dimension element format, either INT8, float 32, or float 64. 06:25 Working towards an Oracle Certification this year? Take advantage of the Certification Prep live events in the Oracle University Learning Community. Get tips from OU experts and hear from others who have already taken their certifications. Once you're certified, you'll gain access to an exclusive forum for Oracle-certified users. What are you waiting for? Visit mylearn.oracle.com to get started.   06:52 Nikita: Welcome back! Brent, what is the vector constructor and why is it useful? Brent: Now, the vector constructor is a function that allows us to create vectors without having to store those in a column in a table. These are useful for learning purposes. You use these usually with a smaller number of dimensions. Bear in mind that most embedding models can contain thousands of different dimensions. You get to specify the vector values, and they usually represent two-dimensional like xy coordinates. 
The dimensions are optional, and the format is optional as well. 07:29 Lois: Right. Before we wrap up, can you tell us how to calculate vector distances? Brent: Now, vector distance uses the function VECTOR_DISTANCE as the main function. This allows you to calculate distances between two vectors and, therefore, takes two vectors as parameters. Optionally, you can specify a metric. If you do not specify a metric, then the default metric, COSINE, would be used. You can optionally use other shorthand functions, too. These include L1 distance, L2 distance, cosine distance, and inner product. All of these functions also take two vectors as input and return the distance between them. Now the VECTOR_DISTANCE function can be used to perform a similarity search. If a similarity search query does not specify a distance metric, then the default cosine metric will be used for both exact and approximate searches. If a similarity search does specify a distance metric in the VECTOR_DISTANCE function, then an exact search with that distance metric is used if it conflicts with the distance metric specified in a vector index. If the two distance metrics are the same, then this will be used for both exact as well as approximate searches. 08:58 Nikita: I was wondering Brent, what vector distance metrics do we have access to? Brent: We have Euclidean and Euclidean squared distances. We have cosine similarity, dot product similarity, Manhattan distance, and Hamming similarity. Let's take a closer look at the first of these metrics, Euclidean and Euclidean squared distances. This gives us the straight-line distance between two vectors. It does use the Pythagorean theorem. It is sensitive to both the vector size as well as the direction. With Euclidean distances, comparing squared distances is equivalent to comparing distances. So when ordering is more important than the distance values themselves, the squared Euclidean distance is very useful as it is faster to calculate than the Euclidean distance, which avoids the square root calculation. 09:58 Lois: And the cosine similarity metrics? Brent: It is one of the most widely used similarity metrics, especially in natural language processing. The smaller the angle means they are more similar. While cosine distance measures how different two vectors are, cosine similarity measures how similar two vectors are. Dot product similarity allows us to multiply the size of each vector by the cosine of their angle. The corresponding geometrical interpretation of this definition is equivalent to multiplying the size of one of the vectors by the size of the projection of the second vector onto the first one or vice versa. Larger means that they are more similar. Smaller means that they are less similar. Manhattan distance is useful for describing uniform grids. You can imagine yourself walking from point A to point B in a city such as Manhattan. Now, since there are buildings in the way, maybe we need to walk down one street and then turn and walk down the next street in order to get to our result. As you can imagine, this metric is most useful for vectors describing objects on a uniform grid such as city blocks, power grids, or perhaps a chessboard. 11:27 Nikita: And finally, we have Hamming similarity, right? Brent: This describes where vector dimensions differ. They are binary vectors, and it tells us the number of bits that require change to match. It compares the position of each bit in the sequence. Now, these are usually used in order to detect network errors. 
11:53 Nikita: Brent, thanks for joining us these last two weeks and explaining what Oracle AI Vector Search is. If you want to learn more about what we discussed today, visit mylearn.oracle.com and search for the Oracle Database 23ai: Oracle AI Vector Search Fundamentals course.   Lois: This concludes our season on Oracle Database 23ai New Features for administrators. In our next episode, we're going to talk about database backup and recovery, but more on that later! Until then, this is Lois Houston… Nikita: And Nikita Abraham signing off! 12:29 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
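
The distance metrics Brent walks through map directly onto a few lines of code. The sketch below uses plain NumPy purely to make the definitions concrete; inside the database you would use the VECTOR_DISTANCE function and its shorthand variants discussed in the transcript.

```python
# Illustrative NumPy versions of the distance metrics discussed in the episode.
import numpy as np

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))      # straight-line distance

def euclidean_squared(a, b):
    return np.sum((a - b) ** 2)               # same ordering as Euclidean, no square root

def cosine_distance(a, b):
    cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos_sim                      # smaller angle means more similar

def dot_product_similarity(a, b):
    return np.dot(a, b)                       # larger means more similar

def manhattan(a, b):
    return np.sum(np.abs(a - b))              # "city block" distance

def hamming(a, b):
    return np.sum(a != b)                     # differing positions, for binary vectors

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 2.0, 5.0])
print(euclidean(a, b), manhattan(a, b), round(cosine_distance(a, b), 4))
```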

Weekend Warrior with Dr. Robert Klapper

Clustering everyone in Blue, the story of Dodgertown.

Over The Edge
AI's Role in Sustainable Building Management with Jean-Simon Venne, CTO and Co-Founder at BrainBox AI

Over The Edge

Play Episode Listen Later Jul 24, 2024 41:45


AI is revolutionizing building energy management. In this episode, Bill sits down with Jean-Simon Venne, Co-Founder and CTO at BrainBox AI, about their cutting-edge AI solutions for energy efficiency. They dive into current AI challenges, the critical need for defining AI's purpose, and the impact of predictive and preemptive control. Additionally, they discuss how to balance AI power consumption with efficiency gains.

Key Quotes:
"The bottleneck is now on your capacity to find the right mix of technology to assemble a new solution."
"You could very rapidly deploy AI in thousands and thousands of buildings, without any bottleneck, and get the first layer of 20, 25 percent energy reduction."
"I think where the future is going in terms of optimizing is not only the building at the building level but optimizing the behavior of the building. So the grid could be optimized."

Timestamps:
(01:48) Jean-Simon's career journey
(04:22) Current bottlenecks in AI
(07:07) Bias in AI models
(13:22) Understanding the complexities of building operations
(20:44) Factors influencing AI predictions
(25:20) Energy consumption in buildings
(28:12) Clustering buildings for grid optimization
(35:00) Developing specialized LLMs for building management

Sponsor: Over the Edge is brought to you by Dell Technologies to unlock the potential of your infrastructure with edge solutions. From hardware and software to data and operations, across your entire multi-cloud environment, we're here to help you simplify your edge so you can generate more value. Learn more by visiting dell.com/edge for more information or click on the link in the show notes.

Credits: Over the Edge is hosted by Bill Pfeifer, and was created by Matt Trifiro and Ian Faison. Executive producers are Matt Trifiro, Ian Faison, Jon Libbey and Kyle Rusca. The show producer is Erin Stenhouse. The audio engineer is Brian Thomas. Additional production support from Elisabeth Plutko.

Links:
Follow Bill on LinkedIn
Follow Jean-Simon on LinkedIn
Edge Solutions | Dell Technologies

Audio News
CHECK POINT ANNOUNCES DEEPBRAND CLUSTERING

Audio News

Play Episode Listen Later Jul 17, 2024 5:31


To strengthen the fight against phishing, Check Point Software has launched DeepBrand Clustering, an evolution of Brand Spoofing Prevention that already protects more than 210 customers in over 190 countries. This innovative solution promises to improve attack detection and prevention, protecting users and organizations from growing cyber threats.

The Datanation Podcast - Podcast for Data Engineers, Analysts and Scientists
58 – Databricks Announcements (Open Source Unity Catalog, Liquid Clustering, Nvidia)

The Datanation Podcast - Podcast for Data Engineers, Analysts and Scientists

Play Episode Listen Later Jun 12, 2024


Alex Merced discusses some of the Databricks announcements at the Data + AI Summit. Follow Alex by visiting https://bio.alexmerced.com/data

Two Bees in a Podcast
Episode 167: Honey Bee Clustering

Two Bees in a Podcast

Play Episode Listen Later Jun 4, 2024 47:07


In this episode of Two Bees in a Podcast, released on June 4, 2024, Dr. Jamie Ellis and Amy Vu welcome back Derek Mitchell from the University of Leeds Institute of Thermofluids to talk about his research article, “Honeybee cluster—not insulation but stressful heat sink.” This episode concludes with a Q&A segment. Check out our website: ufhoneybee.com, for additional resources from today's episode. 

East Meets West Hunt
Ep. 366: Searching for THAT Buck on Trail Camera with Joe Martonik and Justin Mueller

East Meets West Hunt

Play Episode Listen Later May 21, 2024 79:38


Beau Martonik is joined by Joe Martonik and Justin Mueller after a day of spring scouting and filming for an upcoming Mountain Buck Scouting Video Series. They recap the day of trying to find a buck Beau has on camera, working backwards from a thermal hub community scrape, analyzing buck bedding, the holy grail community scrape, tying a spot together, looking for the next world-class buck, learning to be efficient with your time, and much more! Topics: 00:00:00 - Justin retires the Impala, Beau's Tundra Update 00:11:56 - Mountain Buck Scouting Video Series is back! 00:14:05 - I got a buck on camera, what do I do next? 00:19:25 - How the area lays out 00:23:05 - Working outward from the community scrape in the thermal hub 00:32:50 - Finding an early-season buck bed 00:34:57 - Finding the big woods food sources 00:37:15 - The holy grail community scrape 00:46:05 - Tying the spot together finding a big bed near the scrape 00:55:25 - Clustering trail cameras 00:56:45 - Looking for the next mystical, world-class buck 01:01:43 - Learning to be efficient 01:05:05 - Joe's next 170”? 01:08:35 - Avoiding deer season burnout Note** Timestamps will have roughly 4 minutes added to them depending on ad length. Resources: Instagram:   @eastmeetswesthunt @beau.martonik @justinmuellerphotography Facebook:   East Meets West Outdoors  Website/Apparel/Deals: https://www.eastmeetswesthunt.com/ YouTube: Beau Martonik - https://www.youtube.com/channel/UCQJon93sYfu9HUMKpCMps3w Partner Discounts and Affiliate Links: https://www.eastmeetswesthunt.com/partners Amazon Influencer Page https://www.amazon.com/shop/beau.martonik Learn more about your ad choices. Visit megaphone.fm/adchoices

SEO Is Not That Hard
Why you should be Keyword Clustering

SEO Is Not That Hard

Play Episode Listen Later Apr 1, 2024 16:11 Transcription Available


Try our Keyword Clustering tool at: https://keywordspeopleuse.com/keyword-clustering-tool
SEO Is Not That Hard is hosted by Edd Dawson and brought to you by KeywordsPeopleUse.com
You can get your free copy of my 101 Quick SEO Tips at: https://seotips.edddawson.com/101-quick-seo-tips
To get a personal no-obligation demo of how KeywordsPeopleUse could help you boost your SEO then book an appointment with me now
Ask me a question and get on the show: Click here to record a question
Find Edd on Twitter @channel5
Find KeywordsPeopleUse on Twitter @kwds_ppl_use
"Werq" Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/

SEO Is Not That Hard
Breaking News - New Keyword Clustering launched

SEO Is Not That Hard

Play Episode Listen Later Mar 26, 2024 11:29 Transcription Available


Link to watch the video: https://www.youtube.com/watch?v=i-S5gbV2nLI
Link to try out Keyword Clustering: https://keywordspeopleuse.com/keyword-clustering-tool
SEO Is Not That Hard is hosted by Edd Dawson and brought to you by KeywordsPeopleUse.com
You can get your free copy of my 101 Quick SEO Tips at: https://seotips.edddawson.com/101-quick-seo-tips
To get a personal no-obligation demo of how KeywordsPeopleUse could help you boost your SEO then book an appointment with me now
Ask me a question and get on the show: Click here to record a question
Find Edd on Twitter @channel5
Find KeywordsPeopleUse on Twitter @kwds_ppl_use
"Werq" Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/

Misreading Chat
#128: Faiss: A library for efficient similarity search and clustering of dense vectors.

Misreading Chat

Play Episode Listen Later Mar 9, 2024 42:38


Mukai read papers related to Meta's vector search implementation. Please send your comments and feedback via Reddit or our listener mailbox. Reviews and stars on iTunes are also appreciated.

Quantitude
S5E17 Classification and Regression Trees with Yi Feng

Quantitude

Play Episode Listen Later Feb 27, 2024 50:46


In this week's episode Greg and Patrick are honored to visit with Yi Feng, a quantitative methodologist at UCLA, as she helps them understand classification and regression tree analysis. She describes the various ways in which these models can be used, and how these can serve to inform both prediction and explanation. Along the way they also discuss looking pensive, drunken 3-way interactions, Stephen Hawking, parlor tricks, Cartman, validation, dragon boats, anxiety, spam filters, hair loss, audio visualizations, overused tree analogies, rainbows & unicorns, rain in Los Angeles, and Moneyball.Stay in contact with Quantitude! Twitter: @quantitudepod Web page: quantitudepod.org Merch: redbubble.com

SEO Is Not That Hard
Move your content to the next level with Keyword Clustering

SEO Is Not That Hard

Play Episode Listen Later Feb 19, 2024 13:01 Transcription Available


Unlock the full potential of your website's content with a deep dive into the world of Keyword Clustering, as I, Edd Dawson, share my two decades of SEO wisdom. You'll come away with a fresh perspective on how to craft content that hits the sweet spot with Google, all while avoiding the SEO faux pas of cannibalizing your own efforts. Throughout the episode, we dissect the intricate process behind creating powerful keyword clusters, a method that has propelled KeywordsPeopleUse.com to the forefront of the industry. We'll explore the innovative techniques and cutting-edge software that have revolutionized how we group semantically related keywords, ensuring every piece of content you produce is a finely tuned magnet for search engine success. Tune in and transform your approach to SEO with insights and strategies that only come with years of experience in the trenches of site monetization.

SEO Is Not That Hard is hosted by Edd Dawson and brought to you by KeywordsPeopleUse.com
You can get your free copy of my 101 Quick SEO Tips at: https://seotips.edddawson.com/101-quick-seo-tips
To get a personal no-obligation demo of how KeywordsPeopleUse could help you boost your SEO then book an appointment with me now
Ask me a question and get on the show: Click here to record a question
Find Edd on Twitter @channel5
Find KeywordsPeopleUse on Twitter @kwds_ppl_use
"Werq" Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/

YUTORAH: R' Moshe Taragin -- Recent Shiurim
A Sefat Emmet for Bo: Clustering to Geulah; A Doorway to the Future; A Fallen Generation But A Great Generation; Historical Carve-Outs

YUTORAH: R' Moshe Taragin -- Recent Shiurim

Play Episode Listen Later Jan 17, 2024 14:56


Cytokine Signalling Forum
AxSpA Podcast: Bimekizumab in AxSpA & Secukinumab Patient Clustering

Cytokine Signalling Forum

Play Episode Listen Later Dec 22, 2023 26:26


Join us for the latest axSpA podcast brought to you by the CSF! This month Dr Sofia Ramiro, consultant rheumatologist and senior researcher at Leiden University Medical Centre and Zuyderland Medical Centre, is joined once again by experts with a wealth of clinical knowledge. Joining her is Hideto Kameda, Professor of Internal Medicine at Toho University, as well as Atul Deodhar, Professor of Medicine and Medical Director of Rheumatology Clinics in the Division of Arthritis & Rheumatic Diseases at Oregon Health & Science University in Portland, USA. Also joining this insightful group is Xenofon Baraliakos, Professor of Internal Medicine and Rheumatology at the Ruhr-University in Bochum, and Medical Director of the rheumatology centre Rheumazentrum Ruhrgebiet in Herne, Germany. In the first paper discussed, the authors compared the efficacy and safety of bimekizumab with biologic/targeted synthetic disease-modifying antirheumatic drugs in nr-axSpA and AS. Our second paper then goes on to identify distinct clinical clusters based on patient demographics and baseline clinical indicators from the clinical development programme of secukinumab in patients with a variety of rheumatological conditions.

R3ciprocity Podcast
New Feature On R3ciprocity: Clustering Of Papers For Conferences And Classrooms

R3ciprocity Podcast

Play Episode Listen Later Dec 21, 2023 6:19


I talk about a new feature that we are building into the R3ciprocity platform that will hopefully aid people in their classrooms and as well as conference organizing.

Curiosity Daily
Steroid Psychopathy, Iceberg Crash, Stand Up Dizziness

Curiosity Daily

Play Episode Listen Later Nov 22, 2023 12:40


Today, you'll learn about the psychological toll of steroid use, a very slow moving penguin-iceberg collision, and why we sometimes get dizzy when we stand up.

Steroid Psychopath
“Male weightlifters who use steroids are more prone to psychopathology than those who do not.” by Vladimir Hedrih. 2023.
“Clustering psychopathology in male anabolic-androgenic steroid users and nonusing weightlifters.” by Marie Lindvik Jorstad, et al. 2023.
“Anabolic Steroids.” Cleveland Clinic. 2023.

Iceberg Crash
“45-mile-long iceberg slams into penguin refuge in Antarctica, almost causing ecological disaster.” by Harry Baker. 2023.
“A Brief Iceberg-Island Encounter.” by Adam Voiland. 2023.
“Chinstrap Penguin.” n.a. N.d.
“Chinstrap Penguin.” National Geographic. N.d.

Stand Up Dizziness
“Why do you get dizzy if you stand up too fast?” by Anna Gora. 2023.
“Orthostatic Hypotension.” NIH. 2023.
“A Brief Review on the Pathological Role of Decreased Blood Flow Affected in Retinitis Pigmentosa.” by Yi Jing Yang. 2018.

Follow Curiosity Daily on your favorite podcast app to get smarter with Calli and Nate — for free! Still curious? Get exclusive science shows, nature documentaries, and more real-life entertainment on discovery+! Go to https://discoveryplus.com/curiosity to start your 7-day free trial. discovery+ is currently only available for US subscribers. Hosted on Acast. See acast.com/privacy for more information.

Manufacturing an American Century
Innovation & Industry: Rebuilding America's Manufacturing Muscle w/Phillip Singerman Ph.D

Manufacturing an American Century

Play Episode Listen Later Nov 14, 2023 29:56


In this episode of Manufacturing an American Century, host Matt Bogoshian is joined by Phillip Singerman, Ph.D, former U.S. Assistant Secretary of Commerce for Economic Development, Associate Director for Innovation and Industry Services at the National Institute of Standards and Technology, and current AMCC Senior Advisor for Performance Measurement. The two discuss the historical perspective behind today's national industrial policy, and the current rising trend of regionalization and bottom-up leadership that's powering a national manufacturing resurgence.

AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
AI in Forecasting with Jon Bennion, ML | AI Engineer of LLM Ops

AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning

Play Episode Listen Later Oct 25, 2023 26:21


In this episode, I talk with Jon Bennion, a seasoned ML | AI Engineer at LLM Ops, about the fascinating world of AI in Forecasting. Jon shares his expertise in rapid prototyping of AI/ML models and how they are deployed in real-world applications, emphasizing the importance of goal metric orientation for measuring ROI. Join us as we explore the LangChain-centric approach and delve into topics such as Deep Learning, Machine Learning, and Clustering in the context of Forecasting with AI. Investor Email: jaeden@aibox.ai Get on the AI Box Waitlist: ⁠⁠https://AIBox.ai/⁠⁠ Facebook Community: ⁠https://www.facebook.com/groups/739308654562189 Follow me on X: ⁠⁠https://twitter.com/jaeden_ai⁠⁠

Best Day Ever
Episode 119: The Difference Between Unity and Uniformity with Tasha Calvert

Best Day Ever

Play Episode Listen Later Sep 26, 2023 29:55


Mentioned in this episode:
She Belongs Bible Study
Ephesians 5:32
Nehemiah 4
Book, The Big Sort: Why the Clustering of Like-Minded America is Tearing Us Apart

Connect with Tasha Calvert: Website, Instagram, Digging In podcast with Tasha, Prestonwood Baptist Church women's ministry, Tasha's Bible studies on YouTube, Prestonwood Women's Ministry

Connect with Katy: Website, Instagram, Facebook

Centre for Cities
City Minutes: Where to find the melting pots of the new economy

Centre for Cities

Play Episode Listen Later Sep 11, 2023 17:36


Director of Policy and Research Paul Swinney is joined by James Evans, Research Fellow at Centre for Cities and author of the new report, Innovation hotspots: Clustering the New Economy, which identifies 344 hotspots of ‘new economy' activity where promising businesses are clustering together. Paul and James discuss the findings and why new economy hotspots are an urban phenomenon. This episode is part of Centre for Cities' City Minutes series. Please rate, review and share the episode if you enjoyed it.

The Joyous Podcast
The Impact of Generative AI on Enterprises and the Workforce with Marcus Weldon

The Joyous Podcast

Play Episode Listen Later Sep 5, 2023 45:53


Marcus Weldon (13th President of Bell Labs, ex-Nokia) talks to Mike Carden about the impact of generative AI, the opportunities and challenges of LLMs, how AI will augment human productivity, not replace it - and the potential of AI to lead to a more equitable world. "Generative AI is not a portent of doom, it's the portent of equality and equanimity that the world has been waiting for."

About Marcus: Former Corporate CTO of Alcatel-Lucent. 13th President of Bell Labs. Corporate Chief Technology Officer of Nokia. Global Telecoms Business Power 100 2014. Global Telecoms Business 50 CTOs to watch 2014. Awarded the New Jersey Medal for Science and Technology 2016.
https://www.nokia.com/blog/author/marcus-weldon/
https://twitter.com/MarcusWeldon
https://www.linkedin.com/in/marcus-weldon-1266497/

Key moments:
Introduction to Marcus Weldon (0:27)
Marcus's origin story (1:44)
What does the president of Bell Labs do? (5:08)
The transition to an open market for innovation (7:23)
The challenges of enterprises having similar tasks (13:52)
The psychology of autonomous vehicles (16:54)
The future of VC and innovation (21:18)
Why you should jump on the bandwagon of AI (25:44)
Clustering (32:08)
The opportunity to outperform in the current environment (36:26)

#arthistoCast – der Podcast zur Digitalen Kunstgeschichte
Folge 4: Visuelles Flanieren – Mit Computer Vision in großen Bildmengen suchen

#arthistoCast – der Podcast zur Digitalen Kunstgeschichte

Play Episode Listen Later Jul 5, 2023 74:06


In the course of digitizing museum and archive holdings, art history is confronted with an enormous volume of heterogeneous image databases. But how can we open up these large sets of image data? What is visual searching, and how does the technology behind it work?

In this episode, Jacqueline Klusik-Eckert speaks with Prof. Dr. Peter Bell and Stefanie Schneider, M.Sc., about visual searching in large image datasets. Alongside a reflection on our search strategies in art history, the conversation also covers prototypes for visual search, in which different computer vision techniques are tried out in experimental applications. Starting with the question of whether visual searching even exists yet, they discuss different search behaviors and routines for approaching large datasets. It becomes clear that visual searching with computer vision methods resembles a meandering stroll and helps us go beyond the limits of human perception. They also discuss the role these tools play in opening up uncategorized datasets and how they can serve as inspiration for new research ideas.

Along the way, listeners gain insight into the technology behind the user interface, because it is often unclear what an algorithm considers "similar" or why certain works are grouped together in a kind of point cloud, the scatter plot. The two experts explain the underlying methods and also point out their limits. It becomes clear that using these digital tools as aids always goes hand in hand with a discussion of the discipline's own established practices and methods of researching and searching.

Prof. Dr. Peter Bell is Professor of Art History and Digital Humanities at Philipps-Universität Marburg. His research has long focused on application scenarios for computer vision in art history. Among other things, the image search imgs.ai by Fabian Offert was developed in his research group.

Stefanie Schneider, M.Sc., is a research assistant for Digital Art History at Ludwig-Maximilians-Universität München. As a computer scientist and trained application developer, she has built several prototypes for digital art history and talks about the project "iART – Ein interaktives Analyse- und Retrieval-Tool zur Unterstützung von bildorientierten Forschungsprozessen".

Accompanying material for the episodes can be found on the homepage at https://www.arthistoricum.net/themen/podcasts/arthistocast
All episodes of the podcast are stored in heidICON with metadata and a persistent identifier. The episodes are licensed under Creative Commons CC BY 4.0 and can be downloaded. You can find them at https://heidicon.ub.uni-heidelberg.de/#/detail/1738702
For questions, suggestions, criticism, and of course praise, you can email us at podcast@digitale-kunstgeschichte.de

RunAs Radio
High Availability in 2023 with Allan Hirt

RunAs Radio

Play Episode Listen Later Jun 14, 2023 36:40


What does high availability look like in 2023? Richard chats with Allan Hirt about his work with high-availability solutions today - not just on-premises but also in the cloud. Allan talks about the frustration folks had with moving workloads in the cloud during the pandemic panic, lift-and-shifting workloads with a focus on getting things working quickly rather than cost-effectively. The results can be costly, to the point where some folks are considering moving back off the cloud again - but does that make sense? Allan talks about creating high availability efficiently wherever you want to run your workloads!

Links:
Always On on SQL Server with Azure VMs
SQL Server 2022
Azure Regions and Availability Zones
Operations Manager

Recorded May 11, 2023

The Hospital Finance Podcast
Extensions and Clustering - Two New Features within ICD-11 Webinar

The Hospital Finance Podcast

Play Episode Listen Later Apr 26, 2023 12:56


In this episode, Kristen Eglintine, Coding Supervisor at BESLER gives us a glimpse into the upcoming webinar, Extensions & Clustering - Two New Features within ICD-11 that we're hosting on May 3, 2023, at 1 PM ET.

Machine Learning Podcast - Jay Shah
Combining knowledge of clinical medicine and Artificial Intelligence | Emma Rocheteau

Machine Learning Podcast - Jay Shah

Play Episode Listen Later Mar 31, 2023 96:51


Emma is a final-year medical student at the University of Cambridge and is also pursuing her Ph.D. in Machine Learning. With her knowledge of clinical decision-making, she is working on research projects that leverage machine-learning techniques to improve clinical workflow. She will be taking on a role as an academic doctor after her graduation.

Time stamps of the conversation:
00:00:00 Introduction
00:02:08 From clinical science to learning AI
00:13:15 Learning the basics of Artificial Intelligence
00:20:12 Promise of AI in medicine
00:30:13 Do we really need interpretable AI models for clinical decision-making?
00:38:47 Using AI for more clinically-useful problems
00:50:55 Facilitating interdisciplinary efforts
00:54:06 Predicting length of stay in ICUs using convolutional neural networks
01:03:04 AI for improving clinical workflows and biomarker discovery
01:07:55 Clustering disease trajectories in mechanically ventilated patients using machine learning
01:16:37 ChatGPT for medical research or clinical decision making
01:25:21 Quality over quantity of AI works published nowadays
01:31:07 Advice to researchers

Emma's Homepage: https://emmarocheteau.com/
LinkedIn: https://www.linkedin.com/in/emma-rocheteau-125384132/

Also check out these talks on all available podcast platforms: https://jayshah.buzzsprout.com

About the Host: Jay is a Ph.D. student at Arizona State University.
Linkedin: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ for any queries.
Stay tuned for upcoming webinars!

***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

Check out these Podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
AI Today Podcast: AI Glossary Series- Clustering, Cluster Analysis, K-Means, Gaussian Mixture Model

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

Play Episode Listen Later Mar 24, 2023 11:35


The idea of grouping similar types of data together is the main idea behind clustering. Clustering supports the goals of Unsupervised Learning which is finding patterns in data without requiring labeled datasets. In this episode of the AI Today podcast hosts Kathleen Walch and Ron Schmelzer define the terms Clustering, Cluster Analysis, K-Means, and Gaussian Mixture Model, and explain how they relate to AI and why it's important to know about them. Continue reading AI Today Podcast: AI Glossary Series- Clustering, Cluster Analysis, K-Means, Gaussian Mixture Model at AI & Data Today.
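
As a concrete companion to these glossary terms, here is a small scikit-learn sketch that fits both K-Means and a Gaussian Mixture Model to the same unlabeled data. The synthetic blob data and parameter choices are illustrative assumptions, not anything from the episode.

```python
# K-Means and Gaussian Mixture Model on unlabeled data (illustrative sketch).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# Unlabeled data with three latent groups; the true labels are ignored,
# which is the point of unsupervised learning.
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
gmm_labels = GaussianMixture(n_components=3, random_state=42).fit_predict(X)

print(np.bincount(kmeans_labels))  # cluster sizes found by K-Means
print(np.bincount(gmm_labels))     # cluster sizes found by the mixture model
```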

A New Low
Ep. 145: Clustering Your D-Pics in Retrograde

A New Low

Play Episode Listen Later Mar 23, 2023 70:21


El Greg looks for a laugh, Kyle looks to the stars, Joe looks through his phone and Scott looks at the slots.

Raw Data By P3
Like Throwing Water on Gremlins, w/ Fred Kaffenberger

Raw Data By P3

Play Episode Listen Later Mar 21, 2023 72:58


Get ready to tune in to the next episode of Raw Data by P3 Adaptive, where former P3 Adaptive superstar Fred Kaffenberger will be returning as a guest. Rob and Fred will be reminiscing about old times and geeking out over all things Microsoft. But the real kicker? Fred will be revealing a shocking confession: he once faced Rob's dreaded "Interview of Doom" having only known what VLOOKUP was for 2 years! What happened next? Well, let's just say Fred didn't let that minor setback hold him back from becoming a data wizard. Before joining P3 Adaptive, Fred worked as a white paper writer, where he was told he didn't have the "voice of the customer." But Rob knew better: recognizing Fred's talent for helping people get the most out of technology, and a voice that was a near clone of his own, Rob knew Fred had the skill to excel at P3 Adaptive. Today, Fred is over at Oracle, where he's transitioned from writing about migrating from Oracle to Power BI to migrating from Power BI to Oracle. Talk about a change of pace! But true to form, Fred is still a tech wizard, constantly expanding his skillset and crushing it in the world of DAX. As for his introduction to the world of data, it began in sales and moved to data and debugging. Eventually, Fred moved on to database work where he learned one of the most important lessons anyone in the tech field can use: the value of being concise when communicating with developers. As he discovered, the more words you use, the more room there is for interpretation, and nobody wants that. As always, we hope you enjoyed this episode. Be sure to share your thoughts by leaving us a review on your favorite podcast platform.

Also on this episode:
Fred's Data Adventure blog
5 signs you have ADHD and autism by Yo Samdy Sam
William Dodson MD on the Interest-based Nervous System (ICNU)
Fatima the Spinner and the Tent by Idries Shah
Zork online
Discrete mathematics
Referential integrity
Oracle Analytics Cloud
Oracle Release 2--the first commercially available relational database to use SQL
Get Out
Nope
Excel Power Map
Diplo
Naomi Shehab Nai
Pirates of Silicon Valley
k-means Clustering

Tara Brabazon podcast
Maive 8 - Clustering the literature

Tara Brabazon podcast

Play Episode Listen Later Feb 13, 2023 14:14


Tara and Maive talk about literature reviews, and particularly how they can be constructed within the parameters of the artefact and exegesis PhD.

Venture Stories
Talent Identification, Hierarchies, and Clustering with Rohit Krishnan

Venture Stories

Play Episode Listen Later Feb 7, 2023 40:27


Rohit Krishnan (@krishnanrohit), venture capitalist and author of the blog Strange Loop Canon, joins Erik on this episode.
Takeaways:
- Many of the people at the top of their fields today say they would never get hired if they were just starting out today. Today's selection process at elite institutions has become more stringent but has dropped the interesting variance that exists at the top of the pyramid. Plenty of people have gamified the selection process. If you're hiring, you want to find the interesting misfits.
- Higher ed used to be fantastic but now it is groaning under its scale. There should be more of a focus on job training rather than general liberal education.
- Billionaires should be more eccentric and experimental. There aren't enough idiosyncratic billionaires in the world.
- It's easier than ever for information to get from one place to another with the rise of the internet, but it also means that it's easier than ever for ways to use that information to make money to get from one place to another. This has resulted in the barbell distribution of outcomes that we see these days.
- Clustering has important benefits. There's something about bouncing ideas off of other people and egging them on in person that is special, despite the connectivity that the internet has brought.
- Hierarchies make it easy to get things done in general, but hard to get any one thing done.
- There are many more areas where we are not polarized than where we are polarized these days. Changing someone's mind is a function of time and encouragement and repeated explanations, rather than forcefully convincing someone you are right and they are wrong.
Thanks for listening — if you like what you hear, please review us on your favorite podcast platform.
Check us out on the web at www.villageglobal.vc or get in touch with us on Twitter @villageglobal.
Want to get updates from us? Subscribe to get a peek inside the Village. We'll send you reading recommendations, exclusive event invites, and commentary on the latest happenings in Silicon Valley. www.villageglobal.vc/signup

Investor Connect Podcast
Startup Funding Espresso – Clustering Illusion

Investor Connect Podcast

Play Episode Listen Later Jan 31, 2023 1:45


Clustering Illusion
Hello, this is Hall T. Martin with the Startup Funding Espresso -- your daily shot of startup funding and investing.
The clustering illusion is a cognitive bias defined by Wikipedia as the tendency to overestimate the importance of small runs, streaks, or clusters in large samples of random data (that is, seeing phantom patterns).
Investors will see a few deals in a space exit and consider it a hot spot for success when, in the big picture, the sector is no better than any other. Sectors rotate in and out of favor based on investors' interest in funding that sector. When a few startups in a sector raise funding, investors often consider the sector a good area to invest in. After only a handful of deals receive investing interest, other investors will flock to the sector to find more deals to invest in. In analyzing the field of startups, it's often the case that the sector is no better than any other sector for investment. They're looking for patterns where none exist.
To overcome the clustering illusion, use statistical analysis to test the data. This will tell you if there's a real pattern or only the appearance of one.
Thank you for joining us for the Startup Funding Espresso where we help startups and investors connect for funding.
Let's go startup something today.
_______________________________________________________
For more episodes from Investor Connect, please visit the site at:
Check out our other podcasts here:
For Investors check out:
For Startups check out:
For eGuides check out:
For upcoming Events, check out
For Feedback please contact info@tencapital.group
Please share and leave a review. Music courtesy of .
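One way to make that statistical check concrete is a quick simulation; the sketch below is my own illustration (not from the episode) and assumes only NumPy. Even when exits are assigned to sectors completely at random, one sector usually looks "hot":

```python
# Illustrative simulation of the clustering illusion: purely random exits
# across sectors still produce an apparently "hot" sector. Assumes NumPy only.
import numpy as np

rng = np.random.default_rng(7)
n_sectors, n_exits = 10, 60

# Each exit lands in a sector uniformly at random -- no sector is truly better.
exits = rng.integers(0, n_sectors, size=n_exits)
counts = np.bincount(exits, minlength=n_sectors)
expected = n_exits / n_sectors

print("exits per sector:", counts)
print("hottest sector:", counts.max(), "exits vs. an expected", expected)

# A chi-squared statistic against a uniform split quantifies whether the
# spread is anything more than chance (here it usually is not).
chi2 = ((counts - expected) ** 2 / expected).sum()
print("chi-squared statistic:", round(chi2, 2), "with", n_sectors - 1, "degrees of freedom")
```

Comparing that statistic against a chi-squared distribution with nine degrees of freedom is the kind of simple test that separates a real pattern from the mere appearance of one.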

The Behavioral Economics in Marketing's Podcast
Geographic clustering | Definition Minute | Behavioral Economics in Marketing Podcast

The Behavioral Economics in Marketing's Podcast

Play Episode Listen Later Jan 28, 2023 1:53


Geographic clustering | In economics, geographic clustering is defined as a geographic concentration of related companies, organizations or institutions.   

The Secret To Success
ChatGPT Prompt Hack: Unlock the Secret to 10x Smarter Responses

The Secret To Success

Play Episode Listen Later Jan 27, 2023 169:46


Unlock the secret to 10x smarter responses with our ChatGPT Prompt Hack! Learn how to train your language model to understand and respond to complex prompts with ease. Whether you're a developer looking to improve your chatbot or a business owner looking to streamline customer service, this hack is sure to take your language model to the next level. Don't miss out on this valuable information – watch the video now!
Join my fan group here for exclusive news and giveaways: https://www.facebook.com/groups/theofficialantoniotsmithjrfanclub
Catch me live on tour here: https://antoniotsmithjr.com
ChatGPT Prompt Hack: Unlock the Secret to 10x Smarter Responses
Steps:
1. Ignore what I said
2. Build who I am
3. Tell it how it specializes in you
4. Give it a task
5. Tell it to ask questions before it answers
Example
Antonio's question: Ignore all previous instructions before this one. You're an expert computer programmer and you specialize in teaching beginners advanced level machine learning algorithms. Your task is to teach beginners advanced level machine learning algorithms. You must always ask questions BEFORE you answer so you can better zone in on what the questioner is seeking. Is that understood?
ChatGPT's response: Yes, that is understood. I am here to help and will always strive to ask questions to better understand the information or assistance that you are seeking. Is there a specific question or topic you would like me to assist with?
Antonio's response: I want to add multiple machine learning algorithms, including expert level and hybrid algorithms, into the social media platform I have created, but I don't know how.
ChatGPT's response: I understand that you want to add multiple machine learning algorithms, including expert level and hybrid algorithms, into a social media platform you have created, but you are not sure how to do it. Can you provide more information on the social media platform you have created and the specific machine learning algorithms you want to add? Additionally, what programming languages are you familiar with and what is your current level of experience with machine learning? This will help me better understand your situation and provide a more accurate and specific response.
Antonio's response: The name of the platform is Arlingbrook. Arlingbrook is designed to be the leader in organic reach and does not sell user information or data, ensuring that users' personal information remains private. In addition, Arlingbrook also allows you to monetize your own audience by paying you 40% of the income generated from your followers. No matter the size of your audience, you can earn money by creating content, promoting products, or providing services. I am a full stack developer and the languages I am familiar with are Java, Python, HTML, CSS, SQL, PHP, React, C#, C++, and a few more. My current level of machine learning is basic beginner.
The algorithms I want to add are: Recommendation algorithm, User segmentation algorithm, Content ranking algorithm, User matching algorithm, Influencer matching algorithm, User retention algorithm, Sentiment analysis algorithm, Content recommendation algorithm, Hashtag and keyword analysis algorithm, Network analysis algorithm, Engagement prediction algorithm, Image recognition algorithm, Video analysis algorithm, User behavior prediction algorithm, Content creation algorithm, Personalization algorithm, Search algorithm, Network growth algorithm, Predictive algorithm, Content promotion algorithm, Monetization algorithm, Spam/fake account detection algorithm, Friend Recommendation algorithm, Content Relevancy, Engagement Boost, Real-time Trending, Spam Detection, Natural Language Processing, Boosting Content with High Engagement, Customizable Feed, Multi-Language Support, Deep learning algorithm, Privacy algorithm, Friend/follower analysis algorithm, Machine learning algorithm, Content Moderation algorithm, User-generated content algorithm, Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), Long Short-Term Memory (LSTM), Autoencoders, Transformer, Reinforcement Learning, Batch Normalization, Gradient Boosting, Dropout, Recommender Systems, Computer Vision, Anomaly Detection, Clustering, Predictive modeling, Transfer learning, and Reinforcement learning algorithms.
ChatGPT's response: It sounds like you have a lot of ambitious goals for the Arlingbrook platform, which is great! However, implementing all of the machine learning algorithms you listed will be a significant undertaking. I recommend starting with a few key algorithms that align with the platform's goals, and then gradually adding more as you become more comfortable with the process.
To get started, I recommend focusing on the recommendation algorithm, user segmentation algorithm, and content ranking algorithm, as these are likely to have the most immediate impact on the user experience. These algorithms can be implemented using various techniques such as collaborative filtering, content-based filtering, and hybrid methods.
Next, you can consider implementing the user matching algorithm, influencer matching algorithm, and user retention algorithm. These algorithms will help you understand your users better and improve their experience on the platform.
Once you have a good grasp of the basic algorithms, you can move on to more advanced techniques such as sentiment analysis, network analysis, and deep learning algorithms.
As for programming languages, Python is a great choice for machine learning projects; it has a lot of powerful libraries like scikit-learn, TensorFlow, Keras, and PyTorch that can be used to implement the algorithms.
It's also important to note that implementing these algorithms will require a significant amount of data to train the models. Make sure you have a good understanding of the data you have available and how to work with it.
Finally, keep in mind that building a successful platform is not just about the technology, but also about understanding your users' needs and providing them with a great experience. Make sure to include user feedback and testing in your development process to ensure that the platform meets the needs of your users.
Support this podcast at — https://redcircle.com/the-secret-to-success/exclusive-content
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
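As a concrete illustration of the advice to start with a recommendation algorithm using content-based filtering in Python, here is a hedged sketch; the posts and the user history are invented for illustration and nothing here is actual Arlingbrook code:

```python
# Minimal content-based recommendation sketch (scikit-learn, TF-IDF + cosine
# similarity). The posts and the "user engaged with post 0" assumption are
# made up purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "tips for growing an audience with short videos",
    "how creators monetize their followers",
    "best cameras for beginner video creators",
    "machine learning for content recommendation",
]

# Represent each post as a TF-IDF vector.
post_vectors = TfidfVectorizer().fit_transform(posts)

# Pretend the user engaged with post 0; rank the other posts by similarity.
scores = cosine_similarity(post_vectors[0], post_vectors).ravel()
ranked = [i for i in scores.argsort()[::-1] if i != 0]
print("recommended post order:", ranked)
```

A collaborative-filtering or hybrid variant would swap the TF-IDF vectors for user-item interaction data, but the overall shape of the pipeline stays the same.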

The Behavioral Economics in Marketing's Podcast
Clustering | Definition Minute | Behavioral Economics in Marketing Podcast

The Behavioral Economics in Marketing's Podcast

Play Episode Listen Later Jan 10, 2023 1:44


Clustering | Clustering is a psychological term that is defined as the tendency for items to be consistently grouped together in the course of recall. 

SAGE Sociology
Journal of Health and Social Behavior - Racial-Ethnic Residential Clustering and Early COVID-19 Vaccine Allocations in Five Urban Texas Counties

SAGE Sociology

Play Episode Listen Later Nov 29, 2022 20:16


Author Kathryn Freeman Anderson discusses her article, "Racial-Ethnic Residential Clustering and Early COVID-19 Vaccine Allocations in Five Urban Texas Counties" published in the December 2022 issue of the Journal of Health and Social Behavior.

The tastytrade network
Options Jive - September 20, 2022 - Big Move Clustering

The tastytrade network

Play Episode Listen Later Sep 20, 2022 12:49


Wild markets are a time of increased profitability and increased risk. When those large daily moves come, it can feel like they happen very rapidly. Today, Jacob joins Tom to check the data and see how large daily moves have been spaced out on the calendar.
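The kind of calendar check described here can be sketched in a few lines; the sketch below uses simulated prices rather than any tastytrade data, and assumes only NumPy and pandas:

```python
# Illustrative check of how large daily moves are spaced on the calendar.
# The return series is simulated; the 2% threshold and dates are arbitrary choices.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.bdate_range("2022-01-03", periods=250)  # roughly one year of trading days
returns = pd.Series(rng.normal(0, 0.012, size=len(dates)), index=dates)

# Flag "big" moves as daily returns beyond +/- 2%.
big_moves = returns[returns.abs() > 0.02]

# Gaps (in trading days) between consecutive big moves show whether they bunch up.
positions = returns.index.get_indexer(big_moves.index)
gaps = np.diff(positions)

print("number of big moves:", len(big_moves))
print("gaps between big moves (trading days):", gaps.tolist())
```

With real market data, big moves tend to bunch together rather than spread out evenly, which is the behavior the episode examines.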

Internet Marketing: Insider Tips and Advice for Online Marketing
#649 Levelling Up Your Topic Clustering with Skyler Reeves, CEO of Ardent Growth

Internet Marketing: Insider Tips and Advice for Online Marketing

Play Episode Listen Later May 5, 2022 36:43


In this episode we're joined by Skyler Reeves, CEO of Ardent Growth. Skyler has a background in computer science and has spent many years applying the principles of graph theory to content strategy, helping to build topic clustering software that's central to the success of Ardent Growth.
In this episode, we discuss:
How Skyler's background in computer science shaped his approach to topic clustering
What are the benefits of topic clustering?
What's the role of a content strategist with respect to topic clustering?
The biggest obstacle when scaling content strategy
How to approach keyword research for emerging topics, trends or new industries
Referenced on this episode:
https://blog.hubspot.com/marketing/topic-clusters-seo
https://superpath.co/
https://ahrefs.com
CONNECT WITH SKYLER:
https://www.linkedin.com/in/skylerreeves/
https://ardentgrowth.com/
CONNECT WITH SCOTT:
scott.colenutt@sitevisibility.com
https://www.linkedin.com/in/scottcolenutt
CONNECT WITH SITEVISIBILITY:
https://www.sitevisibility.co.uk/
https://www.youtube.com/user/SiteVisibility
https://twitter.com/sitevisibility
https://www.facebook.com/SiteVisibility
http://instagram.com/sitevisibility
If you have feedback, you'd like to be a guest, you'd like to recommend a guest or there are topics you'd like us to cover, please send this to marketing@sitevisibility.com
See acast.com/privacy for privacy and opt-out information.
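As a small illustration of what topic clustering looks like in code, here is a rough sketch that groups invented keyword phrases by textual similarity; it assumes scikit-learn and is not Ardent Growth's software or methodology:

```python
# Minimal topic-clustering sketch: group keyword phrases into clusters by
# textual similarity. The keywords are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

keywords = [
    "best running shoes for beginners",
    "running shoes for flat feet",
    "trail running shoes review",
    "how to train for a 5k",
    "5k training plan for beginners",
    "couch to 5k schedule",
]

# Vectorize the phrases and partition them into two topic clusters.
vectors = TfidfVectorizer().fit_transform(keywords)
labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(vectors)

for cluster in sorted(set(labels)):
    members = [kw for kw, lab in zip(keywords, labels) if lab == cluster]
    print(f"topic cluster {cluster}: {members}")
```

In practice a content team would cluster on search-intent signals or page embeddings rather than raw TF-IDF, but the grouping step itself is this simple.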

Data Skeptic
Fair Hierarchical Clustering

Data Skeptic

Play Episode Listen Later Mar 28, 2022 34:26


Building a fair machine learning model has become a critical consideration in today's world. In this episode, we speak with Anshuman Chhabra, a Ph.D. candidate in Computer Networks. Chhabra joins us to discuss his research on building fair machine learning models and why it is important. Find out how he modeled the problem and what results he found.

Thinking Elixir Podcast
89: Reducing the Friction in Your Flow

Thinking Elixir Podcast

Play Episode Listen Later Mar 8, 2022 49:03


We talk about how designing applications with lower friction points is a valuable goal. LiveView plays a powerful role in that mission. Mark pitches why he thinks it's time to take another look at LiveView if you haven't lately. We talk over some of the business benefits, efficiencies gained and we address some common reasons given for "why it can't work." We also cover some remaining areas of improvement for LiveView. Then we talk about how moving your servers closer to users removes additional friction both for deployment and application design. Mark shares how the fly_postgres library works and how it enables people to build "normal" Phoenix applications using Postgres read-replicas across multiple regions. A fun discussion!
Show Notes online - http://podcast.thinkingelixir.com/89
Elixir Community News
- https://erlef.org/blog/eef/election-2022 – Erlang Ecosystem Foundation is holding elections soon. You can get involved!
- https://gleam.run/news/gleam-v0.20-released/ – Gleam 0.20 released
- https://twitter.com/louispilfold/status/1496108145185337344 – Gleam source code is recognized as a language on GitHub and gets syntax highlighting
- https://twitter.com/louispilfold/status/1497320401461993473 – Work has begun on a Gleam Language Server
- https://github.com/DockYard/flame_on – New performance analyzing library released by Dockyard called "flame_on"
- https://dockyard.com/blog/2022/02/22/profiling-elixir-applications-with-flame-graphs-and-flame-on – Post explains more about the flame_on library
Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com
Discussion Resources
- https://www.youtube.com/watch?v=IqnZnFpxLjI – Mark's 2021 Elixir Conf talk
- https://github.com/readme/featured/server-side-languages-for-front-end – GitHub article "Move over JavaScript - Back-end languages are coming to the front-end"
- https://utils.zest.dev/gendiff – David's Phoenix version diffing tool
- https://github.com/superfly/fly_rpc_elixir
- https://github.com/superfly/fly_postgres_elixir
- https://fly.io/docs/getting-started/elixir/
- https://fly.io/docs/reference/regions/
- https://podcast.thinkingelixir.com/20 – Caleb Porzio interview
- https://plausible.io/
Find us online
- Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir)
- Email the show - show@thinkingelixir.com
- Mark Ericksen - @brainlid (https://twitter.com/brainlid)
- David Bernheisel - @bernheisel (https://twitter.com/bernheisel)
- Cade Ward - @cadebward (https://twitter.com/cadebward)

Nocturne
The Red Flower

Nocturne

Play Episode Listen Later Aug 17, 2021 27:48


Clustering around the warmth and power of a campfire, it feels like the edges of people get softened - we speak and listen to each other differently - there's a deepening that doesn't happen in many other settings. What used to be the norm is now a special treat - gazing into the embers, listening to the mesmerizing crackle, and telling our own secrets and stories in the night.