Building a solid personal brand is much more than having followers. It's about connecting, moving people, sharing your process and, above all, being genuine. Alex Blanco knows this well: he is a content creator with more than a decade of experience on social media, and today he shares his perspective on what building a personal brand from scratch really involves. "I've been making content on social media and YouTube for 13 or 14 years. Things have gone very well and very badly for me. I've lived the experience of having lots of followers, seeing what works, and managing to monetize social media and make a living from it," says our guest. For Alex, there is no better platform for developing as a creator than YouTube: "You can develop your personal brand in the best way possible because people will connect with you better in a 10- or 15-minute video; they'll get to know how you think, how genuine you are on camera, or you can build a persona. And if you're good on YouTube, people will automatically want to follow you on Instagram." The depth of the content, the attention span, and the bond created through video let you show something other formats can't: authenticity. But getting started isn't always easy. Doubts, comparisons, and fear of what people will say often hold creators back. "Don't listen to all those influencers saying that if you do it this way or that way nobody will watch your video... It doesn't matter; do it however you can, but do it so people get to know you. Otherwise you'll never start," warns our expert. Because to move forward, you have to act, even if it isn't perfect. Many underestimate everything that goes into a video. "People don't know all the work and the creative process behind making a video," laments Alex. From strategy to technical details, everything counts. "To start on YouTube you have to understand how the platform works.
You have to know that the most important things are the thumbnail, the title, and the first 7 to 30 seconds of the video," says our guest. And when it comes to creating, observing and learning is key: "I try to draw inspiration from creators or industries that are doing well. I also watch topical videos that have gone viral to see what catches people's attention the most." Beyond the metrics, there's something you can't neglect: generating an emotion. "People don't forget what you tell them; they forget how you made them feel. You can make the best video in the world, but if you didn't make them feel anything, it's pointless. You have to give the audience a reason to stay in the video," stresses our specialist. And when the goal is to make a living from social media, strategy has to come with knowledge. "You have to study a lot to be able to monetize on social media while acting in parallel. You can also do it with Google AdSense and monetize through the advertising that appears on YouTube," Alex comments, but clarifies: "Don't think about money when you start on social media. And if you commit to your YouTube channel, commit seriously, because you won't monetize in the first 6 months. You'll build your personal brand, record your process, create a community, and finally be able to sell your brand." Along that path, consistency is key. "You need a library of videos that will generate monthly income and let you live off YouTube. The more content you have, the better you'll do. But getting to that point can take anywhere from one to five years. Failure either breaks your spirit or you treat it as a lesson," highlights our expert. And to close, a great truth that breaks the myth of the numbers: "You don't need 100,000 followers to make a living from social media. You can have 1,000 followers, with 500 of them loyal, and generate income with them," Alex assures.
Building a personal brand is a long-distance race. It's not about going viral but about building something of your own, something valuable and lasting. Step by step, video by video. Instagram: @alexwhiteworld / @acmediatrend YouTube: @Alexwhiteworld
In this episode, Amit and Dheeraj dive deep into the world of AI reasoning models with Alex, an AI researcher involved in OpenThinker and OpenThoughts. They explore two recent groundbreaking papers, SkyT1 and S1 (Simple Test-Time Scaling), that showcase new insights into how large language models (LLMs) develop reasoning capabilities. From structured reasoning vs. content accuracy to fine-tuning efficiency and the role of active learning, this conversation highlights the shift from prompt engineering to structured supervised fine-tuning (SFT) and post-training techniques. The discussion also touches on open weights, open data, and open-source AI, revealing the evolving AI landscape and its impact on startups, research, and beyond.
Key Topics & Chapter Markers
[00:00] Introduction – Why reasoning models matter & today's agenda
[05:15] Breaking Down SkyT1 – Structure vs. Content in reasoning
[15:45] Open weights, open data, and open-source AI
[22:30] Fine-tuning vs. RL – When do you need reinforcement learning?
[30:10] S1 and the power of test-time scaling
[40:25] Budget forcing – Making AI "think" more efficiently
[50:50] RAG vs. SFT – What should startups use?
[01:05:30] Active learning – AI asking the right questions
[01:15:00] Final thoughts – Where AI reasoning is heading next
Resources & Links
In this second part of episode 10 of the Effortless Podcast, hosts Dheeraj Pandey and Amit Prakash sit down with Alex Dimakis, a renowned AI researcher and professor, to discuss one of the biggest breakthroughs in open AI models: DeepSeek R1. They explore how DeepSeek's innovations in reasoning, reinforcement learning, and efficiency optimizations are reshaping the AI landscape. The conversation covers the shift from large, proprietary AI models to open-source alternatives, the role of post-training fine-tuning, and how reinforcement learning (GRPO) enables reasoning capabilities in LLMs. They also dive into KV caching, mixture of experts, multi-token prediction, and what this means for NVIDIA, hardware players, and AI startups.
Key Topics & Timestamps
[00:00] - Introduction & Why DeepSeek Matters
[01:30] - DeepSeek R1: Open-Source AI Disrupting the Industry
[03:00] - Has China Become an AI Innovator?
[07:30] - Open Weights vs. Open Data: What Really Matters?
[10:00] - KV Caching, Mixture of Experts & Model Optimizations
[21:00] - How Reinforcement Learning (GRPO) Enables Reasoning
[32:00] - Why OpenAI is Keeping Its Reasoning Traces Hidden
[45:00] - The Impact of AI on NVIDIA & Hardware Demand
[1:02:00] - AGI: Language Models vs. Multimodal AI
[1:15:00] - The Future of AI: Fine-Tuning, Open-Source & Specialized Models
Hosts:
Dheeraj Pandey: Co-founder and CEO at DevRev, formerly Co-founder and CEO of Nutanix.
A tech visionary with a deep interest in AI and systems thinking.
Amit Prakash: Co-founder and CTO at ThoughtSpot, formerly at Google AdSense and Bing, with extensive expertise in analytics and large-scale systems.
Guest:
Alex Dimakis: Professor at UC Berkeley and co-founder of Bespoke Labs, Alex has made significant contributions to deep learning, machine learning infrastructure, and the development of AI reasoning frameworks.
Follow the Hosts and the Guest:
Dheeraj Pandey: LinkedIn - https://www.linkedin.com/in/dpandey | Twitter - https://x.com/dheeraj
Amit Prakash: LinkedIn - https://www.linkedin.com/in/amit-prak... | Twitter - https://x.com/amitp42
Alex Dimakis: LinkedIn - https://www.linkedin.com/in/alex-dima... | Twitter - https://x.com/AlexGDimakis
Share Your Thoughts: Have questions, comments, or ideas for future episodes? Email us at EffortlessPodcastHQ@gmail.com
Don't forget to Like, Comment, and Subscribe for more in-depth discussions on AI, technology, and innovation!
In this episode of the Effortless Podcast, hosts Dheeraj Pandey and Amit Prakash sit down with Alex Dimakis, a renowned AI researcher and professor at UC Berkeley. With a background in deep learning, graphical models, and foundational AI frameworks, Alex provides unparalleled insights into the evolving landscape of AI. The discussion delves into the details of foundation models, modular AI architectures, fine-tuning, and the role of synthetic data in post-training. They also explore practical applications, challenges in creating reasoning frameworks, and the future of AI specialization and generalization. As Alex puts it, "To deep seek or not, that's the $1 trillion question." Tune in to hear his take on how companies can bridge the gap between large generalist models and smaller specialized agents to achieve meaningful AI outcomes.
Key Topics and Chapter Markers
Introduction to Alex Dimakis & His Journey [0:00]
From Foundation Models to Modular AI Systems [6:00]
Fine-Tuning vs. Prompting: Understanding Post-Training [15:00]
Synthetic Data in AI Development: Challenges and Solutions [25:00]
The Role of Reasoning and Chain of Thought in AI [45:00]
AI's Future: Specialized Models vs. General Systems [1:05:00]
Alex's Reflections on AI Research and Innovation [1:20:00]
Hosts:
Dheeraj Pandey: Co-founder and CEO at DevRev, formerly Co-founder and CEO of Nutanix.
A tech visionary with a deep interest in AI and systems thinking.
Amit Prakash: Co-founder and CTO at ThoughtSpot, formerly at Google AdSense and Bing, with extensive expertise in analytics and large-scale systems.
Guest:
Alex Dimakis: Professor at UC Berkeley and co-founder of Bespoke Labs, Alex has made significant contributions to deep learning, machine learning infrastructure, and the development of AI reasoning frameworks.
Follow the Hosts and the Guest:
Dheeraj Pandey: LinkedIn: Dheeraj Pandey | Twitter: @dheeraj
Amit Prakash: LinkedIn: Amit Prakash | Twitter: @amitp42
Alex Dimakis: LinkedIn: Alex Dimakis | Twitter: @AlexGDimakis
Share Your Thoughts: Have questions, comments, or ideas for future episodes? Email us at EffortlessPodcastHQ@gmail.com
Don't forget to Like, Comment, and Subscribe for more in-depth discussions on AI, technology, and innovation!
In this week's First $1,000 segment, we meet a web developer who turned his interest in ergonomic office furniture into a profitable side hustle. We explore how he built multiple niche websites that generated tens of thousands in commissions through Google AdSense.
Side Hustle School features a new episode EVERY DAY, featuring detailed case studies of people who earn extra money without quitting their job. This year, the show includes free guided lessons and listener Q&A several days each week.
Show notes: SideHustleSchool.com
Email: team@sidehustleschool.com
Be on the show: SideHustleSchool.com/questions
Connect on Instagram: @193countries
Visit Chris's main site: ChrisGuillebeau.com
Read A Year of Mental Health: yearofmentalhealth.substack.com
If you're enjoying the show, please pass it along! It's free and has been published every single day since January 1, 2017. We're also very grateful for your five-star ratings; they show that people are listening and looking forward to new episodes.
In this special guest episode of the Effortless Podcast, Amit Prakash sits down with Rajat Monga, the creator of TensorFlow and current Corporate Vice President of Engineering at Microsoft. With a career spanning Google Brain, founding Inference, and leading AI inferencing at Microsoft, Rajat offers a unique perspective on the evolution of AI. The conversation dives into TensorFlow's revolutionary impact, the challenges of building startups, the rise of PyTorch, the future of inferencing, and how transformative tools like GPT-4 and Gemini are reshaping the AI landscape.
Key Topics and Chapter Markers
Introduction to Rajat Monga & TensorFlow Legacy [0:00]
The inflection points in AI: TensorFlow's role and challenges [6:00]
PyTorch vs. TensorFlow: A tale of shifting paradigms [16:00]
The startup journey: Building Inference and lessons learned [27:00]
Exploring O1 and advancements in reasoning frameworks [54:00]
AI inference: Cost optimizations and hardware innovations [57:00]
Agents, trust, and validation: AI in decision-making workflows [1:05:00]
Rajat's personal journey: Tools for resilience and finding balance [1:20:00]
Host:
Amit Prakash: Co-founder and CTO at ThoughtSpot, formerly at Google AdSense and Bing, with a PhD in Computer Engineering. Amit has a strong track record in analytics, machine learning, and large-scale systems.
Follow Amit on:
LinkedIn - https://www.linkedin.com/in/amit-prakash-50719a2/
X (Twitter) - https://x.com/amitp42
Guest:
Rajat Monga: A pioneer in the AI industry, best known as the co-creator of TensorFlow. He has held senior roles at Google Brain and Microsoft, shaping the foundational tools that power today's AI systems. Rajat also co-founded Inference, a startup focused on anomaly detection in data analytics. At Microsoft, he leads AI software engineering, advancing inferencing infrastructure for the next generation of AI applications. He holds a B.Tech degree from IIT Delhi.
Follow Rajat on:
LinkedIn - https://www.linkedin.com/in/rajatmonga/
X (Twitter) - https://twitter.com/rajatmonga
Share Your Thoughts: Have questions or comments? Drop us a mail at EffortlessPodcastHQ@gmail.com
In this episode, Gokul Rajaram illuminates how eng leaders can better impact business outcomes & become key players in business strategy! We also address strategies for better goal setting & decision making, shifting to a customer-centric structure, recommendations for building alignment between cross-functional groups, positive collaboration between product & engineering, and how to achieve greater productivity.
ABOUT GOKUL RAJARAM
Gokul Rajaram is an investor and company helper. He serves on the boards of Coinbase, Pinterest, and The Trade Desk. Most recently, he was an executive at DoorDash, a food ordering platform. Prior to DoorDash, he worked at Block as Product Engineering Lead, where he led several product development teams and served on Block's executive team. Prior to Block, he served as Product Director of Ads at Facebook, where he helped Facebook transition its advertising business to become mobile-first. Earlier in his career, Gokul served as a Product Management Director for Google AdSense, where he helped launch the product and grow it into a substantial portion of Google's business. Gokul holds a bachelor's degree in Computer Science Engineering from the Indian Institute of Technology, Kanpur, where he received the President's Gold Medal for being class valedictorian. He also holds an M.B.A.
from the Massachusetts Institute of Technology and a Master of Computer Science from the University of Texas at Austin, where he received the MCD University Fellowship.
SHOW NOTES:
Why it's critical for eng leaders to drive business outcomes (2:04)
How to shift your leadership to impact organizational changes (4:18)
Facilitating conversations around setting numerical / time-driven goals (6:56)
Navigating the shift to a customer-centric structure & approach (9:18)
Recommendations for building around this customer-oriented model (12:04)
Decision-making strategies when approaching customer outcomes (13:25)
Understanding the role of confidence in the decision-making process (15:40)
Challenges faced by eng leaders when making this customer-centric shift (18:06)
How eng leaders can introduce / reinforce accountability in eng orgs (20:11)
Bridging the gap between PMs & eng leaders (23:07)
Challenges / dysfunctions that prevent product & engineering alignment (25:01)
Establishing trust, open dialogue, and mutual respect from the get-go (26:39)
Communication frameworks that increase alignment between product & eng (32:39)
How eng leaders can better approach the "move fast & break things" demand (35:37)
Rapid fire questions (40:19)
LINKS AND RESOURCES
Gokul's website - Contains a collection of Gokul's writing that covers a wide range of topics related to product development, hiring, strategy, leadership, and more!
The Mistborn Saga - Brandon Sanderson's high fantasy saga, which chronicles the efforts of a secret group of Allomancers who attempt to overthrow a dystopian empire and establish themselves in a world covered by ash.
This episode wouldn't have been possible without the help of our incredible production team:
Patrick Gallagher - Producer & Co-Host
Jerry Li - Co-Host
Noah Olberding - Associate Producer, Audio & Video Editor - https://www.linkedin.com/in/noah-olberding/
Dan Overheim - Audio Engineer; Dan's also an avid 3D printer - https://www.bnd3d.com/
Ellie Coggins Angus - Copywriter; check out her other work at https://elliecoggins.com/about/
You have Bitcoin... so... now what?
1. HODL: Hold your Bitcoin and wait for the price to go up. Set it and forget it. Highest price so far ~$20k; it has been fluctuating between $5k and $7k.
2. Trade altcoins: The new and improved HYIP. Startup ICOs (Initial Coin Offerings) are like IPOs but with no regulation (like playing penny stocks). Exchange platforms: HitBTC, Binance, Bittrex. Coins: Litecoin, Ethereum, Monero, Dash, Ripple. View top coins: https://www.coingecko.com/en
3. HYIPs: Get in early, get out fast. Find HYIP programs at http://www.allhyipmonitors.com/ and check how long each has been paying, the top lists of most stable and fastest growing, and the comment section. Examples: Diversity Fund, Better Bits Club, Passive-Earner Monitor (active group on Facebook).
4. Own your own faucet: Make money off advertising. Look for faucet-friendly advertisers; Google AdSense is NOT one. Use a faucet WordPress plugin (see Lvl 14: Make a Website the Fast, Easy, & Cheap Way) and the FaucetHub micro-wallet payment processor to store crypto for auto payments/faucet claims. Insert a captcha to prevent bots.
5. Share affiliate links: To exchanges, ebooks, faucets, etc., on your faucet site or blog (see Lvl 13: Procrasti-logging, the Top 10 Ways to Write Content for Your Niche Without Even Trying).
6. Gamble on crypto casinos or sports betting: mBit, Fortune Jack, CloudBet (NOT available in the US). Fibonacci numbers: a series in which each number (a Fibonacci number) is the sum of the two preceding numbers; the simplest is the series 1, 1, 2, 3, 5, 8. Red/black roulette gives close to a 50% chance to win (slightly less because of the zero pockets).
7. Cloud mining contracts: Hashflare, Genesis Mining (see Lvl 38: A Beginner & Lazy Man's Guide to Bitcoin).
8. Actually use it: Buy stuff at Overstock, Microsoft Store, Newegg, CheapAir (an Expedia replacement), or via Gyft. Send it to family/friends. Donate to charity: Red Cross, Save the Children, Internet Archive (Archive.org), Electronic Frontier Foundation.
Next week… ANOTHER BOSS BATTLE!
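The Fibonacci staking progression mentioned in item 6 can be sketched in a few lines of Python. This is a toy illustration of the sequence as a betting ladder, not betting advice; `base_unit` is an assumed stake size, not something from the episode:

```python
def fibonacci_stakes(n, base_unit=1):
    """Return the first n stakes of a Fibonacci betting progression.

    Each stake is the sum of the two preceding stakes, starting 1, 1.
    """
    stakes = []
    a, b = 1, 1
    for _ in range(n):
        stakes.append(a * base_unit)
        a, b = b, a + b
    return stakes

# In the usual system, you move one step up the ladder after a loss
# and two steps back after a win.
print(fibonacci_stakes(6))  # [1, 1, 2, 3, 5, 8]
```

Note that the progression grows exponentially, which is exactly why such systems can drain a bankroll quickly on a losing streak.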
Went from coffee shop-loving hipster to coffee shop-loving hipster that makes some bank: Colby Williams, owner and founder of Perengo Coffee and author of Small Town Big Money. Music: Star Wars Theme (Metal Version) by Galactic Empire. Disclaimer: As a matter of transparency, some of the links provided are referral links, and I do receive a commission if you sign up and make a purchase through them. Your support helps a lot! May the Fourth Be With You! And Feliz Cinco de Mayo! Make Money Sports Betting with ZERO Investment Blog: https://www.procrastin8r.com/blog/category/sports-betting
In the latest episode of our podcast, we sit down with Gokul Rajaram, a product leader, operator, and board member who has helped build seven of the largest tech companies globally: Google, Facebook, Square, DoorDash, Coinbase, Pinterest, and The Trade Desk. He is popularly known as the 'Godfather of Google AdSense', where he grew the product from zero to over $1 billion in revenue. Later, he founded an NLP company which was acquired by Facebook, where he then led the Ads Product team as Product Director, helping grow revenues from $0.75 billion to $6.5 billion and helping Facebook transition its advertising business to become mobile-first. He helped Square, DoorDash, and Coinbase go public (IPO) as a management team and board member. Additionally, he is a prolific angel investor in 300+ startups including Airtable, CRED, Curefit, Figma, Learneo, Pigment, Postman, Whatfix, and more. In this episode, Gokul shares invaluable insights on how to grow from startup to scale-up, quoting stories from his rich experience. He stresses the importance of product-market fit (PMF), exploring its critical link to monetization and sound unit economics. He also addresses the formidable challenges startups face in the fiercely competitive AI sector and how young entrepreneurs can build in this exciting sector.
In this podcast, below are the topics covered:
0:00 - Journey from India to Silicon Valley
8:10 - Three Stages of a Company: Start-up, Early-Growth, Scale-up
13:41 - Discovering Product Market Fit and Monetization
23:23 - Challenges for Startups in AI
28:20 - Vertical SaaS and Indian Tech Innovation
Gokul offers a masterclass in entrepreneurial excellence: his experiences and strategies provide a roadmap for navigating the complex and ever-evolving tech landscape, making this episode a must-listen for aspiring entrepreneurs and seasoned professionals alike. Enjoyed the podcast?
Please consider leaving a review on Apple Podcasts and subscribing wherever you are listening to this.
Follow Prime Venture Partners:
LinkedIn: https://www.linkedin.com/company/primevp/
Twitter: https://twitter.com/Primevp_in
This podcast is for you. Do let us know what you like about the podcast, what you don't like, the guests you'd like to have on the podcast, and the topics you'd like us to cover in future episodes. Please share your feedback here: https://primevp.in/podcastfeedback
If you've ever faced the challenge of balancing ad placement with maintaining a clean, user-friendly website, you're not alone. Many website owners struggle to determine where ads should go without disrupting their design or frustrating their audience. Google AdSense's Auto Ads feature takes the guesswork out of this process by automatically placing ads where they …
E-Book & Courses: https://linktr.ee/doctortkpsych
1. 2:35 - DTA
2. 3:35 - DTA PLAT
3. 4:35 - Aligned Marketing Academy
4. 6:55 - MVP Coaching
5. 8:35 - Live Mastermind
6. 10:48 - 60 Digital Product Ideas Ebook
7. 11:22 - Unleash Your Inner Boss
8. 12:05 - Time Audit & Dream Schedule
9. 13:10 - Abundant Vacation Guide
10. 14:00 - Private Practice Playbook
11. 16:30 - Business By Design
12. 19:20 - Kajabi - https://app.kajabi.com/r/FGQjqxYr
13. 21:07 - ConvertKit - https://convertkit.com/?lmref=HYBVyg
14. 22:27 - Hello Audio - https://helloaudio.fm/?fpr=tekesia65
15. 23:47 - Amazon - https://www.amazon.com/shop/doctortkpsych
16. 25:05 - QuickBooks
17. 25:46 - Gusto - https://gusto.com/r/tekesia2
18. 26:34 - Google AdSense
19. 27:23 - Instagram Bonuses
20. 27:51 - Simple Practice - https://www.simplepractice.com/referral?p=f407ed71d2
21. 29:12 - Monday - https://mondaycom.grsm.io/lnnqg0jvv50w
Connect with Doctor TK: On Instagram | On Youtube
How and where can you find joy in your life and in your work? In this empowering episode, Rachel Marie Martin opens up about her remarkable journey of finding joy, embracing vulnerability, and harnessing creativity to manifest financial independence and success. Tune in now to this enlightening episode to gain valuable insights and motivation. Don't miss out on the heartfelt wisdom Rachel has to share!
What You Will Learn
How did Rachel Marie Martin's upbringing in the 80s and her relationship with her father influence her skills in technology and emotional intelligence?
What motivated Rachel to transition from writing as a personal outlet to developing her blog, Finding Joy, as an entrepreneurial venture?
In what ways did early adoption of platforms like Twitter contribute to building a community for Rachel's blog?
How did Google AdSense change the financial trajectory for her and her family?
What steps did Rachel take in reshaping her relationship with money?
How can individuals practice "finding joy as a posture of the heart" in their daily lives?
What did Rachel mean by "everyone has the potential to rekindle their inner spark"?
How did the blogging community help Rachel, and how can similar communities aid other aspiring entrepreneurs?
How does Rachel Marie Martin's book "Get Your Spark Back" extend the themes discussed in this podcast episode?
What strategies can individuals employ to push beyond their perceived limitations and strive for financial growth and personal fulfillment?
Connect with Rachel Marie Martin
Rachel Marie Martin on Instagram
Rachel's Blog on Facebook
Rachel Marie Martin on LinkedIn
Resources
Wickedly Smart Women: Trusting Intuition, Taking Action, Transforming Worlds by Anjel B. Hartwell
Connect with Anjel B. Hartwell
Wickedly Smart Women
Wickedly Smart Women on X
Wickedly Smart Women on Instagram
Wickedly Smart Women Facebook Community
Wickedly Smart Women Store on TeePublic
The Wealthy Life Mentor
The Wealthy Life Mentor on Facebook
Listener Line: (540) 402-0043 Ext. 4343
Email: listeners@wickedlysmartwomen.com
In this episode, we delve into the world of Google PPC marketing and share 10 invaluable tips to elevate your strategy, from split testing ad campaigns to managing revenue lag time on Google AdSense. Learn how to effectively target customer groups, optimize product categories, and identify your best and worst selling products. Discover the importance of analyzing product performance and margins, and how to support your merchandising teams through PPC. We also explore the significance of campaign assets and data input in Google AdWords, as well as using PPC to drive email flows. Plus, we discuss the nuances of brand vs. non-brand marketing and how to run successful sales without devaluing your brand. Whether you're a beginner or a seasoned marketer, this episode will help you navigate the complexities of Google marketing with ease.
Reach out to the Spec team here: https://spec.digital/expertise/ppc-services/
Key takeaways:
0:00 10 Tips for Better Marketing on Google
2:09 Why split test ad campaigns?
5:02 How to manage revenue lag time on Google AdSense
7:00 Different campaign splits to test
10:41 Customer groups and how to target them
13:24 Product categories
15:16 Worst & best selling products
17:45 Product performance and margins
21:09 Helping merchandising teams through PPC
23:32 Campaign assets: what they are and what to do with them
28:55 Data input and targeting on Google AdWords
31:20 Using PPC to drive email flows
37:10 Brand vs. non-brand marketing
41:13 How to put on a sale without devaluing your brand
Sign up to our next webinar here: https://www.eventbrite.co.uk/e/from-0-to-10m-on-shopify-skyrocket-your-stores-revenue-tickets-861405435847?utm-campaign=social&utm-content=attendeeshare&utm-medium=discovery&utm-term=listing&utm-source=cp&aff=ebdsshcopyurl
Website: https://winningwithshopify.com/
YouTube: https://www.youtube.com/@winningwithshopify
Instagram: https://www.instagram.com/winning_with_shopify/
TikTok: https://www.tiktok.com/@winningwithshopify
Support the Show.
Welcome back to a new episode of the Niche Pursuits News podcast! Jared and Spencer, as always, offer an insightful take on the latest happenings with SEO, publishers, Google, and AI, and give a peek behind the curtain at their latest projects. Stick around for a laugh when they reveal the weird niche sites they found.
The first order of business is to mention that the March Core Update has still not finished rolling out, and Spencer and Jared decline to guess when that might be. Moving along, they jump into the first big news item: The Verge trolling Google, once again. The Verge published a new article for 2024 "reviewing printers," when in reality the article is just mocking Google for how it ranks articles. Did The Verge actually test the printers? What outbound links did they include? What did John Mueller have to say about it? And most importantly, how's the article ranking? Find out the amusing, and depressing, story behind this news item.
Moving along to another article in The Verge, Spencer and Jared talk about how Google is experimenting with a small subset of news publishers, removing their links from Google News. It's doing this because California is thinking about forcing Google to pay its news publishers. What are the implications for publishers? Is there any precedent in these types of situations? What happened in Australia? And will Google extend this to other areas?
In other Google news, the company is launching a new ad product whereby ads appear inside your content, as hyperlinked text, which takes users to advertisers' products. This is optional for publishers within Google AdSense. What does Spencer think? Would he enable this on his own websites? What scenarios does Jared share where this might work? Tune in to find out.
The latest news item is that Brave Search has released an Answer with AI feature. What do Spencer and Jared think of it and its impact? What does it remind them of?
Moving along to the Shiny Object Shenanigans portion of the podcast, Spencer talks about his Amazon Influencer side hustle and his decision to publish those shopping videos on YouTube. Incredibly, his channel disappeared! What happened? How did it turn out, and what's happening now? When it's Jared's turn, he announces a new side hustle: an email newsletter. He talks about how he plans to build it, why he decided to dive into this project, and his experience in this area.
When it comes to Weird Niche Sites, Spencer reveals Pets or Food, a very entertaining tongue-in-cheek website that does get some traffic, according to Ahrefs. It doesn't have a lot of ads, but it does have merch, which may be a source of revenue. Jared then reveals Line Rusher, which allows people to hire someone to wait in line for them at places like the DMV, Apple, and at restaurants and amusement parks. They offer some interesting services and are constantly hiring line sitters, which may be an interesting option in the current gig economy. What do the stats in Ahrefs reveal? What spin would Jared put on this service?
And that brings us to the end of another episode! We hope you learned from the latest headlines in the space, felt inspired by our side hustle projects, and got ideas from this week's weird niche sites.
Ready to join a niche publishing mastermind, and hear from industry experts each week? Join the Niche Pursuits Community here: https://community.nichepursuits.com
Be sure to get more content like this in the Niche Pursuits Newsletter right here: https://www.nichepursuits.com/newsletter
Get SEO consulting from the Niche Pursuits podcast host, Jared Bauman: https://www.nichepursuits.com/201creative
Fantastic opportunity to earn consistent income. Steve Sipress, entrepreneur, marketing, advertising, sales, tips, ideas, help, wow, strategy, small business owner, direct response, tactics, success, profits, growth, results, marketing consultant, Google, AdSense, ads, advertising, CPM, impressions, clicks, website, publisher, income, revenue, share.
Tod is on holiday until Feb 29, so in this episode, we're bringing you a sample episode of the new podcast from our Google Ads correspondent, Jyll Saskin Gales.
Running an online business using an advertising program like Google AdSense is halal as long as the content presented complies with the principles of Islamic sharia. The Messenger of Allah (peace be upon him) said, "Indeed, Allah is good and accepts only what is good." (Narrated by Muslim). Therefore, as Muslims, we must ensure that the content we produce does not contain forbidden things such as gambling, alcohol, or pornography. By keeping our content pure and following religious guidelines in doing business online, using AdSense can become a halal and blessed source of income.
Navigating the process of obtaining a Certificate of Residence from HMRC has become tricky. Specifically, UK residents managing foreign income or gains often require this certificate to validate their tax residency in the UK. The requirement extends to a diverse range of entities, including individuals, companies, partnerships, trusts, and various other structures. In the following sections, we outline the step-by-step procedures we found online for each type of entity. Individuals are advised to complete the online form under the designated section, unless an alternative form from another state applies, in which case it should be sent to the address provided in the same section. www.Zulftalks.com
The Latent Space crew will be at NeurIPS on Tuesday! Reach out with any parties and papers of interest. We have also been incubating a smol daily AI Newsletter, and Latent Space University is making progress.

Good open models like Llama 2 and Mistral 7B (which has just released an 8x7B MoE model) have enabled their own sub-industry of finetuned variants for a myriad of reasons:

* Ownership & Control - you take responsibility for serving the models
* Privacy - not having to send data to a third party vendor
* Customization - improving some attribute (censorship, multiturn chat and chain of thought, roleplaying) or benchmark performance (without cheating)

Related to improving benchmark performance is the ability to use smaller (7B, 13B) models by matching the performance of larger models, which has both cost and inference latency benefits. Core to all this work is finetuning, and the emergent finetuning library of choice has been Wing Lian's Axolotl.

Axolotl

Axolotl is an LLM fine-tuner supporting SotA techniques and optimizations for a variety of common model architectures. It is used by many of the leading open source models:

* Teknium: OpenHermes, Trismegistus, CollectiveCognition
* OpenOrca: Mistral-OpenOrca, Mistral-SlimOrca
* Nous Research: Puffin, Capybara, NousHermes
* Pygmalion: Mythalion, Pygmalion
* Eric Hartford: Dolphin, Samantha
* DiscoResearch: DiscoLM 120B & 70B
* OpenAccess AI Collective: Manticore, Minotaur, Jackalope, Hippogriff

As finetuning is very formatting dependent, it also provides prompt interfaces and formatters between a range of popular model formats, from Stanford's Alpaca and Steven Tey's ShareGPT (which led to Vicuna) to the more NSFW Pygmalion community.

Nous Research Meetup

We last talked about Nous at the DevDay Recap at the e/acc "banger rave".
We met Wing at the Nous Research meetup at the a16z offices in San Francisco, where they officially announced their company and future plans, including Nous Forge.

Show Notes

We've already covered the nuances of Dataset Contamination and the problems with "Open Source" in AI, so we won't rehash those topics here, but do read/listen to those if you missed them.

* Axolotl GitHub and Discord
* The Flan paper and dataset
* StackLlama model and blogpost
* Multipack paper
* Our episode with Tri Dao
* Mamba state space models - Tri Dao and Albert Gu

Timestamps

* [00:00:00] Introducing Wing
* [00:02:34] SF Open Source AI Meetup
* [00:04:09] What is Axolotl?
* [00:08:01] What is finetuning?
* [00:08:52] Open Source Model Zoo
* [00:10:53] Benchmarks and Contamination
* [00:14:29] The Case for Open Source AI
* [00:17:34] Orca and OpenOrca
* [00:23:36] DiscoLM and Model Stacking
* [00:25:07] Datasets and Evals over Models
* [00:29:15] Distilling from GPT4
* [00:33:31] Finetuning - LoRA, QLoRA, ReLoRA, GPTQ
* [00:41:55] Axolotl vs HF Transformers
* [00:48:00] 20x efficiency with StackLlama and Multipack
* [00:54:47] Tri Dao and Mamba
* [00:59:08] Roadmap for Axolotl
* [01:01:20] The Open Source AI Community

Transcript

[00:00:00] Introducing Wing Lian

[00:00:00] swyx: Welcome to Latent Space, a special edition with Wing Lian, but also with our new guest host, Alex. Hello, hello. Welcome, welcome. Again, needs no introduction. I think it's like your sixth time on Latent Space already. I think so, yeah. And welcome, Wing. We just met, but you've been very prolific online. Thanks for having me.

[00:00:30] swyx: Yeah. So you are in town. You're not local. You're in town. You're from Minneapolis?

[00:00:35] Wing Lian: Annapolis. Annapolis. It's funny, because a lot of people think it's Indianapolis, or Minneapolis, but I used to live in the San Francisco Bay Area years ago, from like 2008 to 2014, so it's fairly familiar here.

[00:00:50] swyx: Yep.
You're the maintainer of Axolotl now, which we'll get into. You're very, very prolific in the open source AI community, and you're also the founder of the OpenAccess AI Collective. Yeah. Cool. Awesome. Maybe we can go over a little bit of your background in tech and then coming into AI, and then we'll cover what

[00:01:06] Wing Lian: happens and why you're here.

[00:01:08] Yeah. So, back on tech, I started years ago. I started way back when I was scraping apartment websites for listings, and then building like SEO optimized pages, and then just throwing Google AdSense on it.

[00:01:24] And that got me through like college, basically.

[00:01:27] swyx: Is that decent money? And what year was this?

[00:01:28] Wing Lian: Like 2004, 2005. Yeah, that's decent money. It's like a thousand bucks a month. But as a college student, that's like gravy. Really good money, right? And then there was just too much competition, and it just sort of died off. I was writing stuff in like Perl back then, which, nobody hosts anything on Perl anymore, right? I still did a little bit more like computer tech support, and then software and web more professionally.

[00:01:54] So I spent some time working on applications in the blood industry. I came out to San Francisco for, I was at SGN, Social Gaming Network, as a startup. They started with Facebook apps, and then they pivoted into doing mobile apps. And then, from there,

[00:02:14] I've been at quite a few more startups since then, and in the last few years I've been in the music space. So I was at United Masters for a while, and then the past year I've been at SoundCloud, but I'm not doing that anymore, and now that I have a lot more time, it's just like, all right, we're going full bore on Axolotl and we're gonna crush AI. So yeah,

[00:02:34] SF Open Source AI Meetup

[00:02:34] swyx: totally. You, so you're here in town for the open source.
Yeah, the meetup that we had yesterday. Yep, yeah, that was amazing. Yeah, it was a big collection. Ollama, Nous Research, Alignment Lab. Anyone else that I missed? I mean, Jeremy Howard is his own thing.

[00:02:47] Yeah.

[00:02:49] And Alex, you were also there. You love to bring SF to the world. Your takes?

[00:02:55] Alex Volkov: It's incredible that we recorded a ThursdAI episode after that one. And LDJ, who usually co-hosts ThursdAI, just briefly mentioned, oh yeah, I talked about it.

[00:03:04] Like, I saw Karpathy, and then I talked to Jeremy Howard, and the guy from Mistral came in, and it's like, he's talking about all these titans of industry, basically, that outside of SF you just don't meet casually hanging out in the same space. You can't pull somebody. He ran into the guy from Mistral while drinking water.

[00:03:20] He didn't even know he was there. It's just, that type of stuff is really hard to find outside of SF. So, absolutely, absolutely great. And also, presentations from Alignment Lab, presentations from Nous Research, who talked about

[00:03:33] swyx: Forge, and some of the other stuff they announced. We can say now they're officially a company.

[00:03:36] I met Teknium.

[00:03:37] He

[00:03:37] Alex Volkov: came over here. He didn't want to get recorded. But maybe.

[00:03:41] Wing Lian: We'll wear him down at some point. Yeah, I'm excited for Forge. They've positioned it as this agentic sort of framework where you just drag and drop things and fill in text with where you want to inject different variables, and it opens up all of these potentials for data pipelines now, right?

[00:03:56] And using your own local LLMs and not relying on GPT 4 or anything like that. Yeah, yeah,

[00:04:02] swyx: good stuff.
Okay, so let's maybe go into the Axolotl origin story, and then we have some intro or background

[00:04:09] What is Axolotl?

[00:04:09] swyx: to do on like the open source model universe and also on fine tuning. But maybe just, since you're talking about your personal journey, what was your personal journey into

[00:04:18] Wing Lian: Axolotl?

[00:04:19] Yeah, so my personal journey started like back in mid March, completely unrelated to AI and Axolotl. And it really started, I fell while skiing and torqued my knee, a grade 3 MCL sprain. And being sort of like an active person that can no longer be active, because I couldn't play soccer, because that requires having knees, until it healed.

[00:04:42] So I decided I needed to find something to do to take up my free time. And that became, well, let's learn how to train these language models. It was everywhere. So I was like, all right, I'm just going to sit down and learn. I think I was using Alpaca-LoRA, because I think the Alpaca paper had just come out then. So I was using the Alpaca-LoRA repo and sort of like learning how to use it.
None of us were like GPU rich back then, and most of us are still GPU poor, but I was doing, what was it, like 4-bit Alpaca-LoRA, there was like a 4-bit version where we were doing quant, or 8, no, 8-bit quantizations. And then I think they had released QLoRA a little bit later, and right before QLoRA came out, I was already starting to do fine tunes, but having this need to sort of mix datasets together. And if you've ever looked at all the various different datasets available on HuggingFace, they all have various different prompt formats, and it's sort of a nightmare. And then I think the other piece is, if you've ever tried to fine tune, at least back then, the ecosystem's probably a little better now,

[00:05:54] everybody required that you pass your hyperparameters as command line arguments. And so it's always like, well, I now have to go copy and paste my previous thing and change things out. And I really wanted it to be in a YAML file, because it was more portable and reproducible.

[00:06:09] So I was doing that, and then the QLoRA paper came out. Tim Dettmers announced that, and somebody looked it up for me yesterday: between that announcement, it took us seven days to get that integrated into Axolotl, right? Which is, I wouldn't say it's really fast, but in a manner that is in a reusable framework, I think it was quite the accomplishment then.

[00:06:33] And so we started picking up traction with people there, and then it's just been building models and iterating on what my needs are. So, yeah. Excellent. Yeah.

[00:06:44] Alex Volkov: I want to ask, for folks who are listening who have never heard of Axolotl, how do you describe it? For somebody who maybe hasn't fine tuned anything, they know open source LLMs exist, they maybe know like LLaMA, what's Axolotl for somebody who doesn't know?
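Wing's point about YAML configs being more portable and reproducible than command-line flags can be sketched minimally. This is not Axolotl's real config schema; the keys below are illustrative, and a hand-rolled flat "key: value" parser stands in for a full YAML parser so the sketch runs with no dependencies:

```python
# Sketch: hyperparameters live in a config file instead of CLI flags, so a
# run is reproducible by sharing one file. NOT Axolotl's actual schema --
# field names are illustrative; real configs use a full YAML parser.

def parse_flat_config(text):
    """Parse flat 'key: value' lines into a dict, coercing numbers."""
    config = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        key, _, value = line.partition(":")
        value = value.strip()
        for cast in (int, float):  # try numeric coercion, else keep string
            try:
                value = cast(value)
                break
            except ValueError:
                pass
        config[key.strip()] = value
    return config

CONFIG = """
base_model: mistral-7b   # which checkpoint to fine tune
learning_rate: 0.0002
num_epochs: 3
micro_batch_size: 4
"""

cfg = parse_flat_config(CONFIG)
print(cfg)
```

The resulting dict would then be handed to a training loop; re-running an experiment is just re-pointing at the file, which is the portability Wing is after.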
Who's never heard of dataset curation or creation before?

[00:07:01] Wing Lian: We sort of have to take a step back and understand that, when you've got these language models, you have what I think most people refer to as like base models, also known as foundational models, right?

[00:07:15] Where some benefactor, whether it's Meta or Mistral or whoever, has gone and spent all this money to train these models on huge corpuses of text, right? And these corpuses, they're generally good across lots of different things, but they're really good at just talking on and on and on, and they're not good at following instructions or having chats or anything like that.

[00:07:40] So, when you think about fine tuning, it's like saying, all right, we have this really sort of good generalized text completion thing, and I want to turn it into something that I can talk to or have follow instructions. So, I think fine tuning is probably best defined like that.

[00:07:58] swyx: Okay, got it.

[00:08:01] What is finetuning?

[00:08:01] swyx: And we actually do want to make sure that we have an overall introduction to fine tuning for people, because again, trying to make sure that we bring everyone along in this journey. We already went into LoRAs and QLoRAs without explaining what

[00:08:12] Wing Lian: they are. Oh yes, yes, sorry.

[00:08:14] swyx: And so I will put things in my words and you can correct me; I'll be the village idiot here.

[00:08:21] So, fine tuning is basically grabbing an open source model off the shelf, and then doing further training on it with a custom dataset of your own. Primarily, people think about it as fine tuning for JSON output, or fine tuning for a style of response. Let's say you wanted to tell jokes, or be funny, or be short, or whatever.

[00:08:43] The open source AI community has really fine tuned in all sorts of different manners.
Let's go over those things now, and then we'll talk about fine tuning methods.

[00:08:52] Open Source Model Zoo

[00:08:52] swyx: So there's a universe of people who fine tune stuff. Yesterday in your slides, you had, I'll just list some of these and then we'll maybe go through some of them, right?

[00:08:59] So Teknium is personally leading OpenHermes, which is I think the sort of premier model out of the Nous community. There's OpenOrca, which you had a hand in. Nous Research itself also has Capybara and Puffin and all the others. There's Pygmalion, which I've never messed with.

[00:09:14] Eric Hartford, I am aware of his uncensored models and his Samantha models. DiscoResearch with DiscoLM. And then you personally have done Manticore, Minotaur, Jackalope, and Hippogriff. What should people know about all these names? Being part of AI Twitter is seeing all these things and going, dude, I'm being DDoS'ed by all these things, and I don't know how different they are.

[00:09:32] What should people know?

[00:09:34] Wing Lian: Yeah, so I think on a lot of these models, generally, we like to think of those as sort of general models. So if you think about it, what is GPT 4, what is ChatGPT? It's a good general model, and then one of the services I think that OpenAI offers is these fine tunings, where you're a business and you have very specific business use cases and you might fine tune for that use case.

[00:10:00] All of these models are really just general use case models that you can then go and maybe fine tune another LoRA over for your use cases, but they tend to be good. With good being relative; it's open source. Open source AI is still sort of in its infancy.
So, good is, it's pretty reasonable.

[00:10:18] It's probably still better than most high schoolers at answering questions and being able to figure things out, and reasoning skills and math and those sorts of things, right?

[00:10:27] swyx: And also as measured on the Hugging Face leaderboard.

[00:10:29] Wing Lian: Yes, well, that's like a whole other discussion, right? There's a whole other group of people who, and I mostly agree with them, that benchmarks are pretty bogus these days. LMSys, I think they published something recently where, even if you think the dataset's not contaminated, you can go and find contamination. And maybe we should step back and say what contamination is, right?

[00:10:53] Benchmarks and Contamination

[00:10:53] Wing Lian: So when you go and do these benchmarks, there's a specific dataset where there are these questions, and usually it's multiple choice. And what can happen is, well, sometimes someone puts the question, maybe maliciously, maybe accidentally, into the training dataset, and now your model knows how to answer the test questions really well, but it hasn't generalized the ability to actually do that.

[00:11:20] Alex Volkov: Right. We've seen some folks competitively announce models that are like the best on that leaderboard, but then it's quite obvious that, in open source, and on that leaderboard, for Hugging Face specifically, I don't know if LMSys has suffered from this, there have been some models that seem to have been competitively trained and some leakage happened into their,

[00:11:41] swyx: like, supposedly.

[00:11:43] I understand, once there's been a credible assertion, Hugging Face actually does take them down, right? Yeah, yeah,

[00:11:48] Alex Volkov: which is really hard to know, right?

[00:11:50] swyx: It's really hard to know, sometimes it's like a pure accident,

[00:11:52] Alex Volkov: it's, oh, oops.
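The leakage Wing describes can be caught, crudely, by checking for long word n-grams shared between training rows and benchmark questions. This is only a sketch: real decontamination pipelines use fuzzier matching, and the n-gram length and toy strings below are illustrative.

```python
# Sketch of a crude contamination check: flag a training example if it
# shares any long word n-gram with a benchmark question. Real pipelines
# are more sophisticated; n=8 and the toy data here are illustrative.

def word_ngrams(text, n=8):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_contaminated(train_example, benchmark_questions, n=8):
    grams = word_ngrams(train_example, n)
    return any(grams & word_ngrams(q, n) for q in benchmark_questions)

benchmark = ["What is the capital of France? A. Paris B. London C. Rome"]
clean = "The Eiffel Tower was completed in 1889 and is located in Paris."
leaked = "Q: What is the capital of France? A. Paris B. London C. Rome -> A"

print(looks_contaminated(clean, benchmark))   # False
print(looks_contaminated(leaked, benchmark))  # True
```

A model trained on the leaked row would ace that benchmark question without having generalized anything, which is exactly the failure mode discussed above.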
You're going through a mixer. I think a responsible acknowledgement that this kind of thing happened to you is also important.

[00:11:58] I saw LDJ from Nous Research acknowledge that. Because many of these datasets are collections of other datasets. A bunch of people are baking, basically. It's alchemy. Right. And so sometimes you don't know. Sometimes you pull an open source dataset and they announce, oh, you know what, actually, the MMLU benchmark went into this dataset, which then went into that dataset.

[00:12:22] So sometimes it's actually an accident and folks take it down. But I've seen some competitive folks who want to put their name out there, because people are starting to notice which is the top

[00:12:30] swyx: model. For those who want a fun take on this, the phi-1 model from Microsoft was accused of being contaminated.

[00:12:37] And I saw this joke paper that was fantastic. It was called Training on the Test Set Is All You Need. It's a super small model that just memorizes everything. It was fantastic. So yeah, contamination, I think we've actually covered it in a previous episode, so we're good. But again, I want to give people a map into the open source AI model universe.

[00:12:57] And Alex, you can also jump in here, because you guys have spent a lot more time with them than I have. So, what should people know about Teknium? What should people know about Nous? And then we can go down the list.

[00:13:05] Wing Lian: Yeah, I think so. I think if we start with Teknium. When you talk to him, I think his response is that he wants to build GPT 4 on his laptop, right?

[00:13:14] So, very, very good at building general models.
I think with Nous, Nous Research, they're looking at more, sort of, more research focused things, like their YaRN models. They have their own trainer for their YaRN models. So they did not use Axolotl for that one? They didn't use that, but, is that, you don't have support for it? I think we do support YaRN, I'd have to double check that answer. Yeah, I'm just kind of curious what you can and cannot support. Yeah, I mean, YaRN is supportable, it's basically, I think it's just replacing the RoPE part of that, so, not a big deal.

[00:13:48] Yeah, it's not a big deal, it's just I haven't gotten to it, not enough people have asked. I think a lot of people have asked for other things, so it's just, squeaky wheel, right? I think at the end of the day, people are building these datasets, and I think if you sort of map things chronologically, these make more sense, because it's like, how do we incrementally improve all of these models?

[00:14:07] So a lot of these models are just incremental improvements over the last thing, right? Whether it is through methods of, how did we curate the dataset, how did we improve the quality of the dataset? So, maybe LDJ talked about it, right? I think for Capybara and Puffin, those were very specific dataset curation techniques that he works on.

[00:14:29] The Case for Open Source AI

[00:14:29] Alex Volkov: So folks are doing this for dataset curation. Folks are doing this for skillset building as well. Definitely people understand that open source is very important, especially after the debacle, the OpenAI weekend that we all had.
And people started noticing that even after Developer Day at OpenAI, the APIs went out.

[00:14:48] And then after that, the whole leadership of the company swiftly changed, and there were worries about, you know, how can people continue building AI products based on these shaky grounds? That turned attention to Teknium, at least with OpenHermes. I started seeing this more and more on Twitter, but also other models. Many companies, they're gonna start with OpenAI just to get there quick, and then they think about, okay, maybe I don't want to share my knowledge, maybe I don't want to sign up for Microsoft, maybe they will change their terms and conditions, so what else is out there? They turned to other companies. Up until yesterday, Google was nowhere to be found. We've talked about Gemini a little bit before in a previous episode.

[00:15:26] swyx: And you can tune in to

[00:15:26] Alex Volkov: ThursdAI. Yeah, you can tune in to ThursdAI. We covered the Gemini release a little bit. But many are turning to the open source community, and seeing that Meta released, and continues to release and commit to, open source AI. Mistral came out, and the model is way smaller than LLaMA and performs significantly better.

[00:15:43] People play with OpenHermes, which is currently the Teknium based, Nous Research sourced, Axolotl trained OpenHermes, I assume, right? And then they play with this and they see that, okay, this is like GPT 3.5 quality. We had ChatGPT's birthday just a week ago. A year ago, we had never interacted with models of this caliber.

[00:16:04] And now there's an open source one that's on my laptop, completely offline, that I can continue improving for my use cases. So enterprises, companies are also noticing this. And the open source community folks are building the skill set, not only the datasets.
They're building the actual kind of, here's how we're going to do this, with Axolotl, with these datasets.

[00:16:21] The curation pieces. Now, interesting, there are like recipes of curation. The actual model training is kind of a competitive thing, where people go and compete on these leaderboards that we talked about, the LMSys arena, which recently added OpenHermes and OpenChat and a bunch of other stuff that are super cool.

[00:16:37] The Hugging Face open source leaderboard. And so there's a competitive aspect to this. There's the open source aspect to this, like Teknium says, I want GPT 4 on my laptop. There's the, let me build a skill set that potentially turns into a company, like we saw with Nous. Nous just started organizing a bunch of people on Discord, and suddenly they're announcing their company.

[00:16:54] It's happening across all these modalities, and suddenly all these people who saw these green pastures and a fairly quick way to, hey, here's a cool online community, I can start doing cool stuff with it. You mentioned the same in the beginning, right? Like, after your accident, what's cool, let me try this out.

[00:17:08] Suddenly I start noticing that there's a significant movement of interest from enterprises and companies into these areas. And this skill set, these datasets, and this community are now very important. Important enough to create an event which pulls in Andrej Karpathy from OpenAI to come and see what's new, Jeremy Howard, like the event that we just talked about. People are flying over, and this is just a meetup.

[00:17:28] So, definitely, the community is buzzing right now, and I think Axolotl is a big piece as well.

[00:17:34] Orca and OpenOrca

[00:17:34] Wing Lian: Cool. Maybe we can talk about Orca real quick. Orca, OpenOrca rather. I think there was a lot of buzz when the first Orca paper came out. And just briefly, what is Orca?
Yeah, Orca was basically having traces of chain of thought reasoning, right?

[00:17:48] So they distill sort of GPT 4. They take a sampling of data from the Flan dataset. Maybe we can add the Flan dataset to the show notes. Yeah, but we've covered it. Okay, cool. They use GPT 4 to say, all right, explain this in a step by step reasoning, right?

[00:18:06] And then you take that and they train the model, and it showed very good improvements across a lot of benchmarks. So OpenOrca was sort of the open reproduction of that, since Microsoft Research never released that particular dataset. And going back to the Hugging Face leaderboard thing, those models did really well.

[00:18:23] And then I think the follow up to that was SlimOrca, right? Going into building the OpenOrca dataset, we never really went in and validated the actual answers that GPT 4 gave us. So what we did was, someone from OpenChat actually cross referenced the original Flan responses, the human responses, the correct answers, with the dataset, and then I went and took both of them and sent them to GPT 4 and said, is this answer mostly correct, right?

[00:18:54] Yeah. And then we were able to filter the dataset, at least of the GPT 4 only answers, from like 800,000 to like 500,000 answers or rows, and then retrain the model, and it had the same performance as the original model to within, I think, 0.1 percent or thereabouts, with 30 percent less data.

[00:19:13] So, yeah. Okay.

[00:19:15] swyx: Interesting. So, I mean, there's so much there that I want to highlight, but yeah, Orca is interesting. I do want people to know about it.
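The SlimOrca step Wing describes, judging each GPT-4 answer against the original reference and keeping only the rows that pass, can be sketched as a simple filter. In the real pipeline the judge was GPT-4 itself, cross-referenced with the original Flan answers; the stand-in judge below is a toy predicate so the sketch runs offline, and the rows are made up.

```python
# Sketch of SlimOrca-style dataset filtering: keep only rows whose answer
# a judge deems "mostly correct". The real judge was GPT-4 comparing
# against the original Flan references; this toy judge just checks that
# the reference answer token appears in the generated answer.

def filter_rows(rows, judge):
    """rows are (question, reference_answer, generated_answer) triples."""
    return [row for row in rows if judge(*row)]

def toy_judge(question, reference, answer):
    return reference.lower() in answer.lower().split()

rows = [
    ("What is 2+2?", "4", "Let's think step by step. 2 plus 2 gives 4"),
    ("What is 2+2?", "4", "Let's think step by step. 2 plus 2 gives 5"),
]

kept = filter_rows(rows, toy_judge)
print(len(rows), "->", len(kept))  # 2 -> 1
```

Scaled up, this is the 800,000-to-500,000 row cut Wing mentions: fewer rows, roughly the same downstream performance.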
Putting chain of thought into the dataset just makes a ton of sense. One thing I think would be helpful for people to scope out is how much data are we talking about when people are fine tuning, and how much time or resources or money does it take to

[00:19:36] Wing Lian: fine tune?

[00:19:37] Yeah, so I think there's a little bit of overlap there with fine tuning techniques, but let's say Orca, and I think even Hermes, they're both relatively large datasets. So, large datasets being, the original OpenOrca was 800,000 rows.

[00:19:55] I believe it was somewhere in the ballpark of a gigabyte of text data. And I believe Hermes is like a quarter million rows of data; I don't know the actual byte size on that particular one. So, going and training a, let's say everybody's training 7 billion Mistral right now, right?

[00:20:15] So, I believe to fine tune 7 billion Mistral on, let's say, 8 A6000s, which have 48 gigabytes of VRAM, I believe it takes about 40 hours. And then that's 40 hours times your hourly rate, depending on where you get your compute, so it's like $500 to fine tune that model. And that's assuming you get it right the first time, right?

[00:20:44] So, you know.

[00:20:45] swyx: Is that something that Axolotl handles, like getting it right the first

[00:20:48] Wing Lian: time? If you talk to anybody, it's like, you've probably tried at least three or four runs or experiments to find the right hyperparameters. And after a while you sort of have a feel for where you need your hyperparameters to be.

[00:21:04] Usually you might do like a partial training run, do some benchmarks. So I guess for Farouk, whether you're going by his actual name or his Twitter handle,
he released the Dharma dataset, which is basically a subset of all the benchmarks. And Axolotl actually supports taking that subset and then just running mini benchmarks across your model every time you're doing an evaluation, so you can see relative performance. It's not going to be the actual benchmark score, but you can get ideas: all right, is this benchmark improving, is this benchmark decreasing, based on, you know,

[00:21:39] swyx: Wait, why don't you run the full benchmark?

[00:21:42] Wing Lian: The full benchmarks take a long time. Significant, yeah, a significant amount of time. Yeah.

[00:21:48] swyx: Okay, so that's like mini MMLU. Yeah. Like,

[00:21:49] Wing Lian: mini BigBench or whatever. Yep, exactly.

[00:21:51] Alex Volkov: It's really cool. When I joined Weights & Biases just recently, one of the things that I tried to do is, hey, I'm a software engineer by trade, I don't have an MLE background, but I joined a company that does primarily MLE, and I wanted to learn from the community, because a lot of the open source community, they use Weights & Biases. And the benchmark that you said that Farouk did, remind me of the name, sorry.

[00:22:13] Dharma? Dharma, yeah, yeah. So Luigi showed me how Dharma shows up inside the dashboard, in the Weights & Biases dashboard, and you can actually kind of see the trending run, and then you can see, per each iteration or epoch, the model improving, trending, on top of everything else.

[00:22:29] Weights & Biases gives hyperparameter tracking, which, like you said, you started with command line and that's really hard to remember. Also the Dharma dataset, like the quick mini MMLU, the mini versions of many different things, it's pretty cool to visualize them as well.
And I heard that he's working on a new version of Dharma, so Dharma 2, et cetera.

[00:22:47] So hopefully we'll see that soon. But definitely it's hard, right? You start this training run, it takes like 40, 50 hours. Sometimes you're SSHing into this machine, you start a process, you send it off with godspeed, and you just go about your day, collecting datasets, and then you have to return.

[00:23:04] And the whole process of instrumentation of this is still a little bit squeaky. But definitely, tuning performance, or grabbing performance in the middle of this, with Dharma and some other tools, is very helpful to know that you're not wasting precious resources going somewhere you shouldn't go.

[00:23:21] Yeah.

[00:23:22] swyx: Yeah. Very cool. Maybe before we go into more details on fine tuning stuff, I just wanted to round out the rest of the Axolotl universe. There's still Eric Hartford's stuff. I don't know if you want to talk about Pygmalion, Disco, anything that you know about

[00:23:35] Wing Lian: those things.

[00:23:36] DiscoLM and Model Stacking

[00:23:36] Wing Lian: Yeah, I think definitely one of the more interesting ones was the DiscoLM 120B, right? Yeah, I know nothing about it. Yeah. So, Alpin from Pygmalion AI, right? So Pygmalion is, they have their own community, a lot of it is based around roleplay models, those sorts of things. And Alpin put together, merged together, Llama 2 70B. I don't remember how he stacked them together, whether he merged the layers in between. There's a whole toolkit for that by Charles Goddard, where you can take a single model and stack them together, or merge multiple models.

[00:24:18] That's like a whole other talk and a whole other toolset, but he was able to create this 120
billion parameter model out of a Llama 2 70B. And then I believe, yeah, Disco is a fine tune of the base 120B, which is, I believe, Goliath 120B.

[00:24:37] swyx: And what are the headline results that people should know about

[00:24:39] Wing Lian: Disco?

[00:24:39] I think for the headline results, I haven't played with it personally, because it's a very large model and there's a lot of GPU, right? But from what I've heard anecdotally, it performs really well. The responses are very good. Even the base model is a lot better than Llama 70B.

[00:24:57] And I think generally everybody's like, we would all love to fine tune Llama 70B, but it's just so much memory, so much compute, right?

[00:25:07] Datasets and Evals over Models

[00:25:07] Alex Volkov: I want to touch on this point, because the interesting thing that comes out of being in this ecosphere and being friends with open source folks is tracking week to week state of the art performance on different models.

[00:25:19] First of all, a lot of the stuff that folks did a couple of weeks ago, and then something like Mistral comes out, and a lot of the stuff from back then doesn't technically make sense anymore. Like, the artifacts of that work, the actual artifacts, they no longer make sense. They're lower on the Hugging Face leaderboard or lower on the LMSys leaderboard.

[00:25:36] But some of the techniques that people use, definitely the datasets, the datasets keep traveling, right? So OpenHermes, for example, is the dataset Teknium cleaned up for only open sourceable data, that previously was just Hermes. And it was previously used to train Llama. And then once Mistral came out, it was used to train Mistral.

[00:25:54] And then it became significantly better on the 7B base Mistral. So the datasets keep traveling, keep getting better a little bit here and there.
And so the techniques improve as well. It looks like both things are simultaneously true. The artifacts of a month and a half ago, the actual models themselves, it's great that Hugging Face has them, because not every company can keep up with the next week's "oh, I'll install this model instead, this model instead."[00:26:19] But the techniques and the datasets keep improving as we go further, and I think that's really cool. However, the outcome of this is that for a long time, for many, many people, including us, who do this every week, who literally talk with people who release these models every week, it's really hard to know.[00:26:36] So, there's a few aspects of this. One, I think, like you said, with the bigger models, the 70B models, you actually have to have somebody like Perplexity, for example, giving you access to the 70B really fast. Or you have to actually find some compute, and it's expensive, especially for the bigger models. For example, Falcon 180B came out, like the hugest open source model.[00:26:56] How do you evaluate this if you can't run it? Nobody liked it. So first of all, nobody liked it, but secondly, only the people who were able to find enough compute to run inference on it could even try; everyone else was like, I can't run this on my laptop. And so that's why something like OpenHermes 7B is much easier, because you can run it on your MacBook.[00:27:14] It's much easier to evaluate. It's much easier to figure out the vibes, right? Everybody talks about the vibes as an evaluation check. If you're plugged in enough, if you follow the right people, and they all say pretty much the same things independently, then you run into the problem of whether they're stochastic parrots repeating the same thing, or whether they actually evaluated it themselves.[00:27:31] Yeah, you never know.
But, you never know, but I think on a large enough scale on Twitter, you start getting the feel. And we all know that OpenHermes is one of the top performing models, on benchmarks, but also on vibes. And I just wanted to highlight this vibe checks thing, because you can have the benchmarks, you can have the evaluations, but they potentially have contamination in them, and they don't necessarily tell you the whole story, because some models are good on benchmarks, but then you talk to them and they're not super helpful.[00:28:00] And I think it's a combination of the benchmarks, the leaderboards, the chatbot arena, because LMSys, remember, their ranking is not only based on benchmarks, it's also people playing with their arena stuff. Actual humans get two answers. I think they completely ignore benchmarks. Yeah, and then they only do Elo.[00:28:18] Oh, they do Elo completely, right? So that, for example, is just people playing with both models and saying, hey, I prefer this one, I prefer that one. But there's also some selection bias: the type of people who will go to LMSys to play with the models are a little bit specific in terms of who they are.[00:28:33] It's very interesting. There's so many models. People are doing this in this way, that way. Some people are doing this for academic rigor, only to test out new ideas. Some people are actually doing it like the Intel fine tunes of Mistral: Intel wanted to come out and show that their hardware approach was possible with Mistral, etc.[00:28:51] And it's really hard to know what to pick, what to use. And especially with the bigger models, like you said, the Llama 70B, the Falcon 180B, it's really hard, because who has the compute to validate those? So I would say: use with caution.
Like, go and research and see if the biggest model that just released was actually worth the tokens and the money you'd spend on it[00:29:12] to try it and, if you're a business, to integrate it.[00:29:15] Distilling from GPT4[00:29:15] swyx: Since you said use with caution, I'll bring in one issue that has always been in the back of my mind whenever I look at the entire universe of open source AI models, which is that 95 percent of the data is derived from GPT-4, correct?[00:29:30] Which technically you can't use for commercial licenses,[00:29:34] Wing Lian: right?[00:29:35] swyx: What is the community's stance on this kind of stuff?[00:29:40] Wing Lian: I think from the community stance, a lot of us are just experimenting, so for us, we're not going and building a product that we're trying to sell, right?[00:29:49] We're just building a product because we think it's interesting and we want to use it in our day to day lives, whether or not we try and integrate it. Personal use, yeah. Yeah, personal use, so as long as we're not selling it, yeah, it's fine. But[00:30:01] swyx: like, I as a company cannot just take OpenHermes and start serving[00:30:05] Alex Volkov: it and make money on it.[00:30:06] OpenHermes you can, because OpenHermes, I think, is the clean-up that Teknium did after the regular Hermes. Please folks, check your licenses before you listen to podcasts. I will tell you though, you could say the same thing about OpenAI.
You could say the same thing, and it kind of makes sense, where OpenAI or Stability AI trains their diffusion model on a bunch of pictures on the internet, and then the court kind of doesn't strike it down, in the Sarah Silverman case, I think, or somebody else who came and said, hey, this has my work in it, because of the way it processes things: the model eventually builds this knowledge into itself, and then it doesn't actually reproduce one to one what happened in the dataset.[00:30:45] You could claim the same thing for open source. Like, we're using, and by we, I mean the open source community that I happily report on, uses GPT-4 to rank, for example, which is the better answer. That's how you build one type of dataset, right? For DPO or something like this, you basically generate a dataset of a question and four answers, for example, and then you go to GPT-4 and say, hey, smartest model in the world right now, up to Gemini Ultra, which we should mention as well: which one of those choices is better?[00:31:11] But the choices themselves are not necessarily written with GPT-4. Some of them may be, so there are fully synthetic datasets. But there are also datasets that are just ranked with GPT-4, but actually generated with a sillier model, or like a less important model.[00:31:25] The lines are very blurry as to what type of stuff is possible or not possible. And again, when you use a model that's up on Hugging Face, the license says you can use it. OpenAI is not going to come after you, the user.
If anything, OpenAI will try to say, hey, let's prevent this type of thing from happening, but I honestly don't think that they could even know. Not that it makes it okay; it's just that they also kind of did this with the Internet Archive, and also, I think that some of it is fair use.[00:31:55] You use models to help you augment tasks, which is what GPT-4 lets you do.[00:32:00] swyx: Yeah, the worst thing that OpenAI can do is just kick you off OpenAI, because it's only enforced in the terms of service.[00:32:05] Alex Volkov: Sure, but just to clarify who they're going to kick out: they could kick out, like, Nous, for example, if Nous are abusing their service. A user of the open source, fully Apache 2 open source model, for example, won't get kicked out just because they use both.[00:32:22] I don't believe so. I don't think OpenAI has a claim for that.[00:32:25] swyx: Well, we're not lawyers, but I just want to mention it for people to know it's an issue.[00:32:30] Wing Lian: And one of the things, like, I talked to someone recently, and I think they were also interested in it, but also to the point of, right, if I use a model trained on GPT-4 data, but I use that model to then generate new data,[00:32:46] is that model, is that data okay? So you start going down this whole rabbit hole. So yeah. All right.[00:32:53] swyx: Fantastic. Cool. Well, I think that roughly highlights most of the open source universe. You also have your own models. Do you want to shout out any one of them? Yeah.[00:33:01] Wing Lian: I mean, I think early on, Manticore got a lot of love.[00:33:04] I think it was mostly popular in the roleplay communities. It tended to be pretty truthful. It tended to have relatively good answers, depending on who you ask, right?
But, I think for me, releasing models was a way to try and continue to build out the product, figure out what I needed to put into the product, how to make it faster, and, if you've got to go and debug your product, you may as well have it do something useful.[00:33:29] Awesome. So, yeah.[00:33:31] Finetuning - LoRA, QLoRA, ReLoRA, GPTQ[00:33:31] swyx: Okay, and then maybe we'll talk about fine tuning techniques. So this is going to be a little bit more technical than just talking about model names and datasets. So we started off talking about LoRA, QLoRA. I just learned from your readme there's ReLoRA, which I've never heard about.[00:33:45] Could you maybe talk about parameter efficient fine tuning, that whole journey, like, what[00:33:50] Wing Lian: people should know? Yeah, so with parameter efficient fine tuning, I think the popular ones, again, we'll start with LoRA, right? So, usually what you do is you freeze all the layers on the base model, and then, at the same time, you sort of introduce additional[00:34:08]
You introduce another set of layers over it, and then you train those. And it's done in a way that is mathematically possible, particularly with LoRA: when you train the model, you run your inputs through the base model, whose weights are frozen, but you also run them through the additional weights, and at the end you combine the two to get your outputs. And when you're done training, you're left with this other set of weights, right, that are completely independent. And then from that, some person smarter than I figured out, oh, they've done it in such a way that now I can merge these weights back into the original model without changing the architecture of the model, right?[00:35:03] So that tends to be the go to. And you're training much fewer parameters, so when you do that, yes, you still need to have all of the original weights, but you have a smaller gradient, you have a smaller optimizer state, and you're just training fewer weights, so you can tend to train those models on much smaller GPUs.[00:35:27] swyx: Yeah. And roughly, what I've seen out there is roughly 1 percent the number of parameters that you're training, which is that much cheaper. So Axolotl supports full fine tune, LoRA, QLoRA?[00:35:40] Wing Lian: Yes. So QLoRA is very similar to LoRA.
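The LoRA mechanics described here, a frozen base path plus a trainable low-rank path that merges back in at the end, can be sketched in a few lines of numpy. The sizes and rank below are toy assumptions for illustration, not anything from Axolotl itself:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 16, 2                        # hidden size and LoRA rank (r << d)
W = rng.normal(size=(d, d))         # frozen base weight
A = rng.normal(size=(r, d)) * 0.01  # trainable down-projection
B = rng.normal(size=(d, r)) * 0.01  # trainable up-projection

x = rng.normal(size=d)

# During training: run the input through the frozen base weights
# and through the low-rank adapter, then combine the outputs.
y_adapter = W @ x + B @ (A @ x)

# When training is done: merge the adapter back into the base weights
# once, leaving the model architecture unchanged.
W_merged = W + B @ A
assert np.allclose(y_adapter, W_merged @ x)

# You only trained 2*r*d parameters instead of d*d, which is why the
# gradient and optimizer state fit on much smaller GPUs.
print(2 * r * d, "trainable vs", d * d, "frozen")
```

Merging works because the adapter output `B @ (A @ x)` equals `(B @ A) @ x`, so the rank-r product folds straight into the original weight matrix.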
The paper, if I remember correctly: traditionally, most people who did LoRAs were putting the model weights in 8-bit, and then doing parameter efficient fine tuning over the LoRA weights. And with QLoRA, they were quantizing the weights down to 4-bit, right, and then I believe they were also training on all of the linear layers in the model.[00:36:15] And then ReLoRA, that was an interesting paper, and then I think it got implemented. Some people in the community tried it out, and it showed that it didn't really have the impact that the paper indicated it would. And from what I was told recently, they re-released something for ReLoRA, like, a few weeks ago, and it's possibly better.[00:36:44] I personally haven't had the time. What was the[00:36:46] swyx: main difference,[00:36:47] Wing Lian: apart from quantization? I don't know. Okay. What was the main difference, sorry?[00:36:49] swyx: Apart from quantization, right? Like,[00:36:50] Wing Lian: QLoRA's thing was, like, we'll just drop off some bits. With ReLoRA, what they did was, you would define some number of steps that you would train your LoRA with, or your QLoRA.[00:37:01] Like, you could do ReQLoRA if you really wanted to. You would train your LoRA for some number of steps, and then you would merge those weights into your base model, and then you would start over.
So by starting over, the optimizer has to sort of re-optimize again and find what's the best direction to move in, and then do it all again, and merge it in, and do it all again. And theoretically, according to the paper, doing ReLoRA, you can do parameter efficient fine tuning but still have the performance gains of doing a full fine tune.[00:37:38] swyx: Yeah, and[00:37:39] Wing Lian: GPTQ? And GPTQ, so I think GPTQ is more similar to QLoRA, where it's mostly a quantization of the weights down to 4-bit, but GPTQ is a specific methodology or implementation of quantization. Got it.[00:37:57] Alex Volkov: Wing, for folks who use Axolotl, your users, some people who maybe want to try it out:[00:38:03] do they need to know the differences? Do they need to know the implementation details of QLoRA versus ReLoRA? Or is it okay for them to just know that Axolotl is the place that's already integrated them? And if that's all they need to know, how do they choose which method to use? Yeah,[00:38:22] Wing Lian: so I think most people aren't going to be using ReLoRA.[00:38:25] I think most people are going to be using either LoRA or QLoRA. And I think they should have an understanding of why they might want to use one over the other. Most people will say that with QLoRA, the quality of the final model is not quite as good as if you were to do a LoRA or a full fine tune, right?[00:38:44] Just because you've quantized these down, so your accuracy is probably a little off, and by the time you've done the QLoRA, you're not moving the weights how you would on a full fine tune with the full parameter weights.[00:38:56] Interesting.[00:38:57] swyx: Okay, cool. For people who are more interested, obviously, read the papers.
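To make "quantizing the weights down to 4-bit" concrete, here is a toy absmax int4 round trip. Note this is a simplification: QLoRA actually uses a 4-bit NormalFloat datatype with double quantization, and GPTQ uses a more involved error-correcting procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8)).astype(np.float32)  # a weight matrix

# Absmax quantization: scale so the largest weight maps to the edge of
# the signed 4-bit range, then round every weight to one of 16 levels.
scale = np.abs(W).max() / 7
W_q = np.clip(np.round(W / scale), -8, 7).astype(np.int8)

# At compute time, weights are dequantized back to float on the fly.
W_dq = W_q.astype(np.float32) * scale

# The round trip is lossy, which is the accuracy trade-off mentioned
# above: the error is bounded by half a quantization step.
assert np.abs(W - W_dq).max() <= scale / 2 + 1e-6
```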
I just wanted to give people a high level overview of what these things are. And you've done people a service by making it easy to try them out. I'm going to also ask a question which I know to be wrong, but I'm curious, because I get asked this all the time.[00:39:15] What is the difference between all these kinds of fine tunes[00:39:17] Wing Lian: and RLHF? Okay, between all of these sorts of fine tunes and RLHF. So all of these sorts of fine tunes are, ideally, taking knowledge that the base model already knows, and presenting it to the model in a way that has the model use what it already knows to answer in a particular way, whether you're extracting general knowledge or a particular task, right?[00:39:44] Instruct tune, chat, those sorts of things. And then generally with RLHF, so let's go back, what is it? Reinforcement Learning with Human Feedback. So if we start with the human feedback part: what you're doing is you generally have a given prompt, and then maybe you have one, maybe you have two, or, I think, if you look at Starling, you have up to, what, seven different possible responses, and you're ranking those responses on some sort of metric, right? Whether the metric is how much I might like that answer, or, I think with Starling, it's how helpful was the answer, how accurate was the answer, how toxic was the answer, those sorts of things, on some sort of scale. And then you use that to go back and take a model and nudge it in the direction of that feedback, to be able to answer questions based on those preferences.[00:40:42] swyx: Yeah, so, and is it commutative? Can you apply fine tuning after and onto an RLHF model? Or should the RLHF come in afterwards,[00:40:54] Wing Lian: after the fine tune?
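The ranking step described here is commonly turned into a training signal with a pairwise Bradley-Terry loss over reward scores; the scores below are made up for illustration, and Starling, DPO, and PPO pipelines each use their own variants of this idea:

```python
import math

def preference_loss(r_chosen, r_rejected):
    """Pairwise preference loss: -log sigmoid(margin). Small when the
    preferred response out-scores the rejected one, large otherwise."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical reward-model scores for two responses to one prompt.
good_ranking = preference_loss(1.8, 0.4)   # chosen scored higher
bad_ranking = preference_loss(0.4, 1.8)    # ranking violated

# Training pushes the model toward low-loss (correctly ranked) outputs.
assert good_ranking < bad_ranking
```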
Um, yeah, I don't know that there's been enough research one way or another; like, I don't know.[00:41:02] That's a question that's been asked on Discord. Yeah, I definitely would say I don't know the answer. Go and try it and report back to me and let me know, so I can answer for the next guy.[00:41:10] swyx: It's shocking how much is still unknown about all these things. Well, I mean, that's what research is for, right?[00:41:16] Wing Lian: So actually, I think I saw on the top of a leaderboard, it was a Mistral base model, and they didn't actually fine tune it. They just did an RLHF fine tune on it using, I don't recall which dataset, but it benchmarked really well.[00:41:37] But yeah, you'd have to go and look at it. But, so it is interesting, going back to that: traditionally, most people will fine tune the model and then do a DPO, PPO, some sort of reinforcement learning over that, but with that particular model, it seemed like they skipped the supervised fine tuning, or SFT.[00:41:55] Axolotl vs HF Transformers[00:41:55] swyx: Cool. One thing I did also want to comment on is the overall competitive landscape, I don't know. Hugging Face Transformers, I think, has a PEFT module.[00:42:05] Wing Lian: Yeah, yeah, the PEFT, the Parameter Efficient Fine Tuning, yep. Is that a competitor to you? No, no, so we actually use it.
We're just a wrapper over sort of the Hugging Face stuff.[00:42:15] So that is their own module, where they have taken the responsibility for these parameter efficient fine tuning methods; it lives in that particular package, where Transformers is mostly responsible for the modeling code and the trainer, right?[00:42:35] And then there's an integration between the two, and there's a variety of other fine tuning packages, I think like TRL and trlX; trlX is the Stability AI one, yeah, the CarperAI one, and TRL is a Hugging Face trainer. Even that one's just another wrapper over the Transformers library and the PEFT library, right?[00:43:00] But what we do is, yes, we also use that, but we also have more validation, right? So, there are some of us who have done enough fine tunes to know, oh, this and this just don't go together, right? But most people don't know that, so like... Example? Like, which one and one don't go together? I don't have an example offhand, but if you turn this knob and this knob, right? You would think, all right, maybe this will work, but you don't know until you try. And then by the time you find out it doesn't work, it's maybe five minutes later; it's failed.[00:43:34] It's failed in the middle of training, or it's failed during the evaluation step. And you're like, ah. So we've added a lot more validation, so that when you've created your configuration, you run it through, and now:
The validation code says, this is probably not right, or probably not what you want.[00:43:52] So are you, like, do you[00:43:53] swyx: do some linting of your YAML file?[00:43:56] Wing Lian: I guess you could call it linting; it's sort of like...[00:44:00] swyx: Is there a set of rules out there somewhere? Yeah, there's a set of rules in there. That's amazing, you should write documentation like: this rule exists because this user, at this time, ran into this bug, and that's what we invested in.[00:44:10] It's like a good collection[00:44:11] Wing Lian: of knowledge. Yeah, it is, and I guess if you really wanted to figure it out, you could git blame everything. But, yeah, I think that's always a useful thing, because people want to experiment, but people will get frustrated when you're experimenting and it breaks and you don't know why, or you know why and you've just gone down the rabbit hole, right?[00:44:37] So I think that's one of the big features that I find important, because it prevents you from doing things you probably shouldn't have. And sometimes we will let you do those things, but we'll try and warn you that you've done that.[00:44:50] I[00:44:51] Alex Volkov: have a follow up question on this, actually, because yesterday we hung out at this open source event, and I stood by you a couple of times when people told you, oh, Axolotl, I use Axolotl, it's super cool, and the first thing you asked, immediately, was: what can we improve?[00:45:04] And yes, from multiple folks. And I think we talked about this a little bit, where it's a developer tool, a machine learning slash developer tool. Your purpose in this is to help and keep people, as much as possible, like: hey, here's the best set of things that you can use right now.
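The kind of validation being described can be sketched as a rule table checked right after the YAML config is parsed into a dict. The option names and rules below are hypothetical stand-ins, not Axolotl's actual schema:

```python
# Each rule encodes a lesson from a failed run: two knobs that look
# reasonable together but are known not to work.
INCOMPATIBLE = [
    ("load_in_8bit", "load_in_4bit", "pick one quantization level"),
    ("relora_steps", "full_finetune", "ReLoRA needs adapter training"),
]

def validate(cfg):
    """Return human-readable warnings up front, instead of letting the
    run fail five minutes into training or during the evaluation step."""
    warnings = []
    for a, b, reason in INCOMPATIBLE:
        if cfg.get(a) and cfg.get(b):
            warnings.append(f"'{a}' and '{b}' don't go together: {reason}")
    return warnings

# A config that would otherwise fail mid-run gets flagged immediately.
assert validate({"load_in_8bit": True, "load_in_4bit": True})
assert validate({"load_in_8bit": True}) == []
```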
The bare libraries, or the bare trainer, for example; it is a bare trainer.[00:45:28] And also, maybe we should talk about how fast you're implementing these things. So you mentioned the first implementation took a week or so. Now there's a core maintainer group, right? Features are landing, like QLoRA, for example. NEFTune, I don't know, is maybe one example of something that people said was going to be cool, and then eventually it was one of those things that didn't really shake out; people quickly tested it.[00:45:48] So, there's a ton of... Wait, NEFTune is cancelled? I don't know if it's fully cancelled, but based on vibes, I heard that it's not that great. But the whole point that I'm trying to make with NEFTune is that existing in the community of Axolotl, or even following the GitHub repo or following the Discord, is a fairly good way to learn these kinds of gut feelings that you just mentioned, right?[00:46:14] Like where maybe this knob and that knob don't work together. Some of these are not written down. Some of these are tribal knowledge that passes from place to place. Axolotl is a great collection of many of them. And so, do you get that back also from the community of folks who just use it? Like, how do you know who uses this?[00:46:30] I think that's still an issue, like, knowing if they trained with Axolotl, or whether they should add this to things.
Talk about how you get feedback, and how else you should get feedback.[00:46:38] Wing Lian: Yeah, I mean, most of the feedback comes from the Discord, so people come in and they can't get a training run going, or they run into obscure errors, errors that... there's a lot of things that maybe, as a product, we could catch, but there's a lot of things that at some point we need to go and do, and it's just on the list somewhere.[00:46:58] That's why when people come up, I'm like, what were your pain points? Because as a developer tool, if you're not happy with it, or you come in and the first attempt takes you 30 minutes and you're still not happy, you leave the tool. And you might move on, maybe to a better tool, maybe to one with less frustration, but it may not be as good, right?[00:47:17] So I'm trying to figure out, all right, how can I reduce all this frustration? Because for me, I use it every day for the most part, right? And so I am blind to that, right? I just know, I go do this, this, and this, and it pretty much mostly works, right? So I don't have that learning curve that other people are seeing, and I don't understand their pain points.[00:47:40] Yeah,[00:47:40] Alex Volkov: you don't have the ability to onboard yourself as a new user, completely new to the whole paradigm, to get in the door and go: oh no, I don't even know how to ask about this problem or error.[00:47:53] swyx: Cool. The last few things I wanted to cover were the more advanced stuff that you covered yesterday.[00:48:00] 20x efficiency with StackLlama and Multipack[00:48:00] swyx: So I'll just caution this as, yeah, this is more advanced. But you mentioned StackLlama and Multipack. What are they[00:48:06] Wing Lian: and what should people know?
Yeah, so, StackLlama: that paper came out, and StackLlama, I think, was two separate concepts that they announced. So the first one was... they being Hugging Face.[00:48:20] Yeah, sorry, yes, they being Hugging Face. So the first one was this idea of packing, packing sequences together. So if we think about training data, right: to keep the math easy, we'll use the terminology of words.[00:48:39] Let's say your training data is 500 words long, and let's say your context length, how much data your model can accept, or that you want to feed into your model, is 4,000 tokens, right? So if you're training at 4K context and you're only using 500 of it, you're sitting with the other[00:49:05] 3,500 words that you're not using, right? And typically that's filled with these PAD tokens. I think I made the analogy last night that it's like having a glass here: you fill it up with a shot of liquor, and that's your training data, and then you just fill it up with more water, and those are your PAD tokens, and it just doesn't do much, right?[00:49:27] It's still the same thing, but you still have to go through all of that to go through all your training data. And so what StackLlama showed was you could just take your training data and append the next row of training data until you've filled that entire 4K context. So in this example, right, with 500 words to 4K, that's 8 rows of training data.[00:49:48] But the problem with that is that a lot of these transformer models are very much relying on attention, right?
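The arithmetic in this example, 500-word rows padded out to a 4K context versus eight rows packed into one, can be sketched like so (the token strings are placeholders):

```python
CONTEXT_LEN = 4000
examples = [[f"ex{i}_w{t}" for t in range(500)] for i in range(8)]

def pad(tokens, length, pad_token="<pad>"):
    """Naive batching: one example per row, the rest is pad 'water'."""
    return tokens + [pad_token] * (length - len(tokens))

def pack(examples, length):
    """StackLlama-style packing: append whole examples until the
    context window is full, then start a new row."""
    rows, row = [], []
    for ex in examples:
        if len(row) + len(ex) > length:
            rows.append(row)
            row = []
        row += ex
    if row:
        rows.append(row)
    return rows

padded = [pad(ex, CONTEXT_LEN) for ex in examples]  # 8 rows, 87.5% pad
packed = pack(examples, CONTEXT_LEN)                # 1 full row, no pad

assert len(padded) == 8 and len(packed) == 1
assert "<pad>" not in packed[0]
```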
So, like, if you now have this sequence of words, the model has seen all of these other words before, right? And then it sees another set of words, and another set of words, but it's learning everything in the context of all the words that it's seen before.[00:50:13] We haven't corrected the attention for that. And just real quickly, since I said that paper was two concepts: the other one was, I believe, a reinforcement learning thing, but that's outside the scope of this. So going from that, I implemented it early on, because I was like, oh wow, this is really great.[00:50:29] And yes, it saves you a bunch of time, but the trade off is a little bit of accuracy, ultimately. But it still did pretty well. I think when I did Manticore, I think it used that concept from StackLlama of just appending these sequences together, right? And then the next evolution of that is Multipack, right?[00:50:51] So, there was a separate paper on that; I believe it got referenced in the Orca paper, where you could properly mask those out using, I think it was, a lower block triangular attention mask. So, there's that. I did try implementing that, manually recreating that mask. But then the one from OpenChat, he was helping with OpenOrca as well, and he had done an implementation of Multipack where he used FlashAttention. So FlashAttention was released by Tri Dao, and it was this huge performance gain.[00:51:35] Everybody uses it now; even in the Transformers library, people are taking all of these models and making them compatible with FlashAttention.
But in FlashAttention, there is one particular implementation that lets you say: well, I'm sending you all of these sequences stacked together like you would in StackLlama, but let me also send you another set of information about where each set of sequences is; this is where the first set of sequences is, this is where the second set of sequences is.[00:52:06] So if they were 500 words long and you stacked them all together, you would just send it a row of information that was like 0, 500, 1000, 1500, etc., out to 4000. And it would know, all right, I need to break this up, and then run the forward pass with it. And it was much, much more performant.[00:52:29] And I think you end up seeing like 10x, 20x improvements. I mean, I think FlashAttention was like a 2x improvement, and then adding that with the Multipack, you start to see, depending on how much data you have, up to like a 20x improvement sometimes. 20x. 20x. Wow. Yeah.[00:52:48] And I only know the 20x because, before last night, I re-ran the Alpaca run. I looked up the Alpaca paper because I just needed a frame of reference where somebody had done it, and I think they used eight A100s for three hours, and they said it cost them 100 dollars. I don't think eight A100s cost that; I don't know how much it costs right now.[00:53:14] But I ended up rerunning it. Usually a dollar an hour, right? Yeah, so, eight... The cheapest is like a[00:53:18] Alex Volkov: dollar an hour for one.[00:53:20] Wing Lian: Yeah, so that's still like 24, 25 dollars. But maybe if you're going on Azure, maybe it's 100 on Azure. I mean, it used to be more expensive, like, a year ago.[00:53:31] Yeah, and then, so I re-ran it with all of the optimizations turned on, just to see what it would be. And usually Multipack is the biggest optimization, so Multipack with FlashAttention.
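The "row of information" being described corresponds to cumulative sequence offsets (the `cu_seqlens` argument in flash-attn's variable-length interface). A plain-Python sketch of the bookkeeping:

```python
seq_lens = [500] * 8   # eight 500-word examples packed into one 4K row

def cu_seqlens(lens):
    """Cumulative offsets marking where each packed sequence starts."""
    offsets = [0]
    for n in lens:
        offsets.append(offsets[-1] + n)
    return offsets

offsets = cu_seqlens(seq_lens)
assert offsets == [0, 500, 1000, 1500, 2000, 2500, 3000, 3500, 4000]

# With these offsets the kernel knows to break the row apart, so tokens
# attend only within their own segment, never across a boundary.
def segment(t, offsets):
    return max(k for k in range(len(offsets) - 1) if offsets[k] <= t)

assert segment(10, offsets) == segment(499, offsets)   # same example
assert segment(499, offsets) != segment(500, offsets)  # boundary holds
```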
And I think I spun it up on 8 L40s, and it ran, and I didn't let it run all the way through, I just grabbed the estimated completion time, and it was like 30 minutes. So it would have cost like 4 or 5 dollars to run the entire thing, to reproduce the Alpaca paper, right?[00:54:00] Which is crazy. It's crazy. 20x,[00:54:02] Alex Volkov: yeah. I want to ask about, like, you said you turned on all the optimizations. Is that the YAML file with Axolotl? You just go and check off, like, I want this, I want that?[00:54:10] Wing Lian: Yeah, yeah, so there's one particular YAML file in there, it's under examples, llama2, fft, optimize.[00:54:20] So, I think someone had created one where they just put in all of the optimizations and turned them on. And it actually does run, which is sort of surprising sometimes, because sometimes you optimize this and optimize this, and they just don't work together, but, yeah.[00:54:36] Just turn the knobs on, and fine tuning should really just be that easy, right? I just want to flip the knob and move on with my life and not figure out how to implement it.[00:54:47] Tri Dao and Mamba[00:54:47] Alex Volkov: Specifically, the guy behind FlashAttention came up with something new. Do you want to talk about this a little bit? Do you want to briefly cover Mamba?[00:54:53] Yeah, let's talk about Mamba. So, what is Mamba?[00:54:57] Wing Lian: Oh, gosh. I
In this podcast episode, we speak to Ameyaw Kissi Debrah, a blogger and prominent figure in Ghana's online media landscape. Our conversation revolved around the evolution of the internet, blogging, and digital media in Ghana, and Debrah's journey in this dynamic field.

Ameyaw Debrah reflects on the evolution of blogging and the transition from traditional website-focused platforms to social media. We discuss the intricate balance of adapting consumer behaviour and content strategies to align with the changing digital landscape. This adaptability was evident in his personal journey, transitioning from simply covering events to identifying as a blogger as a new internet opportunity took shape. Debrah delved into the transformation of content formats over the years, noting the shift towards social media content. He shared his experiences with early monetisation strategies, such as Google AdSense and direct advertising, and how these evolved into influencer marketing with major brands. The rise of video content, particularly on platforms like TikTok, was another key topic. Debrah shared his experiences and challenges with video platforms and discussed the evolution of his video-focused venture, Ameyaw TV.

Looking towards the future, Debrah stressed the importance of evolving with the latest trends to sustain a digital media career. He acknowledged the difficulty of predicting future trends but expected technologies like AI to play a significant role. He emphasised the need for African countries to keep pace with global digital advancements to remain competitive. Debrah also highlighted the importance of diversity in content creation, encouraging exploration of niche areas such as the environment, technology, or health. This approach, he believed, could unlock unexplored opportunities in African digital media. Lastly, Debrah shared insights on the challenges of creating a media company focused on video content.
He emphasised the importance of starting early, adapting based on audience feedback, and maintaining a passion for one's work. Hosted on Acast. See acast.com/privacy for more information.
20 ideas for starting online businesses. The app for listening to podcasts and earning Bitcoin is Fountain: https://borjagiron.com/fountain

Before starting today's episode, I bring you a free SEO tool that will help you climb positions in Google. You will be able to detect errors on your website, review inbound and outbound links, perform keyword analysis, analyze the competition, and set up alerts. Just go to https://borjagiron.com/ahrefs and start using what may well be the best free SEO tool on the market. Remember, borjagiron.com/ahrefs. I've left the link in the description.

Best hosting: Hostinger. To create your website, blog, or online store at the best price. How to buy hosting and a domain with Hostinger at the best price: https://triunfacontublog.com/mejor-hosting-wordpress/ Get a discount: https://borjagiron.com/hostinger

Hello and welcome to the "Marketing Digital" podcast. I'm Borja Girón, and every Tuesday you'll learn everything you need to get more clients, visits, and revenue in your online business. If you're an entrepreneur, it's time to grow your business. Receive my secrets for succeeding as an entrepreneur free in your email every day, through stories, lessons, tools, and tips you can put straight into practice. In less than 3 minutes a day, you'll get training that will mark a before and after in your business. Sign up for free at: https://borjagiron.com

And now... Are you ready? Let's get started!

1: Offer your services (designer, copywriter, translator, editor...). Sign up on platforms like Fiverr, Upwork, Freelancer, Guru, Toptal, PeoplePerHour.com, Truelancer, SolidGigs, Dribbble. You can use AI on all of them.
ChatGPT, Bing, OpusClip… https://borjagiron.com/mejores-herramientas-inteligencia-artificial/

More business ideas? https://www.theopenprojects.io/ https://www.indiehackers.com/ https://www.producthunt.com/ Ready-made code: https://codecanyon.net/ https://www.starterstory.com/

- Success story: TheFork partnered with Google Maps to add a table-booking button to restaurant listings in a couple of clicks. Specialize in one tool for one audience. AI for dentists. Shopify for psychologists.

1: Copywriter.
2: Video or audio editor. AI video creator. CapCut course.
3: Community manager.
4: Virtual assistant (email management, Canva).
5: Create niche websites, blogs, and online stores with Google AdSense and affiliates: https://borjagiron.com/google-adsense/
6: Web designer specializing in memberships for dentists. Speed up websites.
7: Product photographer. Sell photos on Shutterstock. Build wedding websites.
8: Podcast editor. Create podcasts and earn money with sponsorships. Find interviewees and charge them. Create music for podcasts.
9: App developer. A Tinder-style app where both people pay only when they start chatting after a match, not for reach. With ads. Charge to see who rejected you, so you know your profile is being shown and can improve it.
10: Consultant. I'm a digital marketing consultant myself.
11: Create designs in Canva and sell them. Excel templates. Templates for Google Slides, for Instagram…
12: Text translator. With AI.
13: Book editor. Upload to Amazon, proofread, create covers…
14: Writer.
15: Content creator / influencer. YouTuber. Podcaster. Blogger. Paid newsletter with Substack. Subscriptions on Instagram or X. OnlyFans. Courses. Affiliates.
17: Create a paid community on Telegram, Discord, or WhatsApp.
18: Online teacher. English, yoga, computer basics for seniors. Sell courses. Teach live classes and sell courses. Hotmart. https://borjagiron.com/plataformas-educativas/
19: Create an online store. Affiliates. Dropshipping. Your own products.
Wallapop and Vinted. https://sell.amazon.es/comienza Etsy.
20: Salesperson. Setter. Sales closer.

Points to keep in mind: You can support all of these with a newsletter. Weigh the advantages and disadvantages. Do your SWOT analysis. Get paid. Build a landing page. Create content. Affiliate programs with recurring lifetime payouts coming in an upcoming episode.

Triunfers entrepreneur community: https://borjagiron.com/comunidad

Remember to subscribe to the podcast so you don't miss the rest of the marketing news, updates, tricks, and trends. If you want to keep hearing these episodes, share this one, like it, leave 5 stars, or comment on the episode. You can also access my digital marketing courses at https://triunfacontublog.com Receive my secrets for succeeding as an entrepreneur every day in your email: https://borjagiron.com/newsletter I'm Borja Girón, you've been listening to the Marketing Digital podcast; see you in the next episode.
ESA and NASA advance with their reusable spacecraft / Arm invests in Raspberry Pi / Geothermal energy in parking garages / .ING domains / Drones surrender in Ukraine

Sponsor: Cruz Roja's CRECE project is a new initiative to fight unwanted loneliness. It is an innovative project from both a technological and a social standpoint, with the goal of designing and testing new methods of social support that avoid institutionalization. If you find yourself in this situation, or want to collaborate or volunteer, sign up.

⚡ Researchers propose using underground parking garages for geothermal energy. German academics calculate that cars parked in the sub-basements of buildings warm the surrounding groundwater by their mere presence: enough energy for 15,000 homes in Berlin.
No sooner is the October Google core update over than the next core update is already rolling out. And that is not all: the next reviews update has already been announced. Google has admitted that there were problems with the October core update that affected how sites appear in Google Discover. Now that these problems have been fixed, some websites can expect more traffic from Google Discover. Starting next year, Google AdSense will pay publishers per impression instead of per click. That could lead to more advertising on some websites. Years after the originally planned end date, Google has now completed the switch to mobile-first indexing. According to Google, what matters most for helpful content is, above all, the originality of the content.
Welcome back to another episode of the Niche Pursuits News Podcast. This week's episode is seriously packed with a ton of important and interesting headlines in the SEO, digital marketing, and content creation space. So grab some popcorn, get comfortable, and let's get started! Spencer and Jared kick off the episode by talking about Google's latest announcement about an impending November core update, which started to roll out on November 2nd. We can expect to see the effects over the weekend. They talk about Google's explanation for the latest update and how there's also a review system update coming next week. Moving on, they share an article in The Verge accusing SEOs of ruining the internet. Spencer summarizes the 8000-word article, and he and Jared discuss whether the author ultimately blames SEOs or Google itself for bringing down the quality of the internet. What does Jared think about the article? Is The Verge hypocritical for publishing an article like this? What happened to Danny Sullivan, the Google employee who responded to Spencer's tweet? Tune in to find out! The conversation then shifts to an article on Detailed, and a tweet, by Glen Allsopp about product review affiliate keywords. Allsopp shares his findings after analyzing 10,000 search results, including some staggering statistics for sites like Reddit and Quora. The next news item on the agenda has to do with Google's efforts to fix a bug from the October 2023 update, which impacted Google Discover traffic. Does that mean people who got massive traffic from Google Discover recently will see it disappear? Will people without much Google Discover traffic see it increase? Continuing with news about Google, Spencer and Jared share an article that shows how Google is inserting paid ads among organic search results. They talk about the implications of this move and how it may or may not affect publishers.
Then Spencer shares an article about updates to Google AdSense, as the network is shifting to per-impression payments for publishers. How will this impact their earnings? Why is Google doing this? Since the network has always been pay-per-click, this is a major change to how it functions. Tune in to hear what Spencer and Jared have to say. Jared geeks out on the last news item on the agenda, an analysis of Google's E-E-A-T Knowledge Graph. A recent article took a deep dive into how Google understands what is and isn't expert content and the importance of getting into the Knowledge Panel and building a brand. Jared shares some good ideas on how to get Google's stamp of approval, so don't miss them! When it comes to Side Hustle Shenanigans, Spencer provides an update on his second faceless YouTube channel, for which the stats are underwhelming. As he is completely hands-off in this endeavor, his advice for anyone looking to do something like this is to be very involved and detail-oriented. Listen in to find out what he thinks about this business model. Jared reports on his Amazon Influencer side hustle, noting that October was his worst month ever. Why was this? What's driving all the volatility? What does he expect in the coming months? There's just a little bit of time left, and Spencer shares his weird niche site first: Astronaut.io. This fascinating site features videos from YouTube that have zero views. With 105k visitors over the last three months according to SimilarWeb, it seems that some people (mostly in Russia) are enjoying them. Jared's weird site is in the celebrity niche, Who's Dated Who, which offers the dating history of every celebrity you can think of. He and Spencer discuss the site's ranking system, how it might be getting its data, its use of a forum, and how it's monetized. This DR56 site is ranking for over 600k keywords, with estimated monthly organic traffic of 1.2 million. And that brings us to the end of another great episode.
Thanks for tuning in for another week and getting the latest in SEO news. See you next Friday! Be sure to get more content like this in the Niche Pursuits Newsletter Right Here: https://www.nichepursuits.com/newsletter Want a Faster and Easier Way to Build Internal Links? Get $15 off Link Whisper with Discount Code "Podcast" on the Checkout Screen: https://www.nichepursuits.com/linkwhisper Get SEO Consulting from the Niche Pursuits Podcast Host, Jared Bauman: https://www.nichepursuits.com/201creative
Episode 184 contains the important Digital Marketing News and Updates from the week of Oct 23-27, 2023.

1. Google Introduces "Search Themes" - A New Optional Performance Max Signal: Google has introduced a new feature called "Search Themes" for its Performance Max campaigns. If you're not familiar, Performance Max campaigns use Google's AI to automatically place ads across Google's landscape after analyzing your budget, assets, feeds, and landing pages to predict valuable placements. The new feature allows advertisers to have more control over these automated campaigns by providing specific topics or categories relevant to their business. This helps the AI system better understand what kind of traffic would be most beneficial for your business - better targeting means more relevant traffic, and more relevant traffic means higher conversion rates.

Why should you care? Traditional keyword-based advertising has its limitations. For instance, if you've just launched a new product or entered a new market, you might not have enough data for effective keyword targeting. Search Themes fill this gap by allowing you to provide additional context about your business, helping Google to better understand and target your ads. You can add up to 25 search themes per ad group in your Performance Max campaign, and these themes will be treated like phrase and broad match keywords in regular search campaigns. Search themes will respect any brand exclusions and negative keywords you've set at the account level. This is a significant step towards giving advertisers more control over automated systems, which has been a long-standing request from the business community. Google has launched this feature in beta and plans to add more robust search term insights and guidance around utilizing Search Themes in 2024. Early feedback from pilot testers has been positive.

2. Google's Latest Update on Structured Data!
- Google has just rolled out an update that allows you to mix different types of structured data on your website. If you're wondering what structured data is, it's a way to label your website's content so search engines like Google can understand it better. This helps improve your website's visibility in search results, which is crucial for attracting more customers.

Before this update, you had to choose between using JSON-LD or Microdata formats for your structured data. Each has its own pros and cons. JSON-LD is easier to maintain and read, while Microdata integrates directly into your HTML. Now, Google says it's okay to use both formats together. This flexibility can make your website more efficient and effective in communicating with search engines.

Why is this important for you? Imagine you run a blog on your business website. Previously, you might have had to duplicate your article content in the structured data to make it understandable to Google, making your code bulky. With this update, you can use Microdata for the article content and JSON-LD for other metadata, avoiding unnecessary duplication. This means cleaner, more efficient code, which can lead to faster load times and a better user experience. This change is optional, so you don't have to rush to update your existing structured data. But it opens up new possibilities for optimizing your website. Whether you're looking to improve your site's SEO or streamline its code, this update offers a valuable opportunity to do both.

3. Don't Let Google Penalize You: Avoid These Content Mistakes Now! - Are you aware that the type of content you publish on your website can either make or break your online presence? Google's Search Liaison, Danny Sullivan, recently shed light on what the search engine considers "unhelpful content." In simple terms, unhelpful content is material written primarily to rank well in search engine results, rather than to serve your audience's needs. Why should you care?
Google is the gatekeeper to your online visibility. If your content is deemed unhelpful, it could seriously harm your website's search engine ranking. This means fewer people will find your business online, leading to lost opportunities and revenue. Sullivan pointed out that if you're writing articles like "20 Fun Things You Can Do Today" just to rank well for the term "fun things," then you're on the wrong track. The focus should be on creating content that is genuinely useful to your audience. He also cautioned against using tools that suggest what to write about based on what might rank well. According to him, this approach often leads to content that Google considers unhelpful.

So, what's the takeaway? Shift your focus from trying to please search engines to meeting the needs of your audience. If someone asks a question and your content provides a clear answer, that's "people-first content." And guess what? Google loves it! Don't risk your online visibility by making easily avoidable mistakes. Make the shift to audience-focused content today and watch your online presence grow.

P.S. Google's guidelines are not just rules but opportunities to improve. Take action now to align your content strategy with what truly matters!

4. The Real Culprit Behind Your Website's Ranking Drop: It's Not CLS! - Have you noticed a sudden drop in your website's Google search rankings and are scrambling to find out why? You might be tempted to blame it on "Cumulative Layout Shift" (CLS), especially if you've recently received warnings about it from Google Search Console. But hold on, Google's John Mueller has made it clear: CLS is not the reason for sudden, significant drops in search rankings.

Firstly, what is Cumulative Layout Shift? It's a metric that measures the visual stability of your website. For example, if elements on your webpage move around as it loads, that's a high CLS score, and it's generally considered bad for user experience.
However, according to Mueller, even if you have issues with CLS, it's not going to cause a drastic drop in your Google rankings.

Why is this important for you? Because focusing on the wrong issue can waste your valuable time and resources. Google has consistently stated that page experience signals, like Core Web Vitals (which includes CLS), are not significant ranking factors. They might act more like a "tie-breaker" than a major ranking signal. So, if you've seen a drop in traffic, the culprit is likely something else. In summary, while it's good to optimize for a better page experience, don't panic or "over-focus" on metrics like CLS when you see a drop in your rankings. Your time is better spent analyzing other potential issues that could have a more significant impact on your website's performance.

P.S. Understanding what really affects your website's ranking can save you from unnecessary stress and help you focus on what truly matters for your business. Don't chase the wrong problems!

5. Stop Blaming Your Web Host: The Real Story Behind Google's "Hostload Exceeded" Error! - If you've been scratching your head over the "Hostload Exceeded" error message in Google Search Console, you're not alone. This error has left many website owners puzzled, leading them to question their web hosting services. But Google's John Mueller has clarified that the issue is not with your web host; it's something else entirely.

First, let's break down what "Hostload Exceeded" means. This error appears when you try to index your website's pages using Google Search Console. Indexing is crucial because it helps Google understand your website's content, making it searchable and visible to potential customers. So, when you see an error like this, it's natural to worry. However, Mueller has stated that the problem is not with your web host or even with Google's crawling and indexing processes.
Instead, the issue arises when people "spam" the URL inspection tool by submitting too many URLs for indexing. In other words, the error is a result of user behavior, not a technical glitch or quality issue with your website.

Why is this important? Because understanding the real cause behind this error can save you time and effort. You don't need to switch web hosts or make drastic changes to your website. Instead, be mindful of how many URLs you're submitting for indexing. Normal crawling and indexing by Google will happen naturally, so there's no need to force the process.

P.S. Transparency and accurate information are key to solving problems. Don't waste time fixing what's not broken; focus on what truly matters for your website's success!

6. Google's New Privacy Feature: IP Protection - Google is taking a significant step in enhancing user privacy with its new IP Protection feature for Chrome. If you're wondering what IP Protection is, it's a feature designed to mask users' original IP addresses, making it harder for websites to track them. This is crucial for you as a business owner because it could impact how you target and reach potential customers online.

Why is IP Protection important? In today's digital age, privacy is a growing concern. Many users are wary of how their data is being used, and Google's new feature aims to address this by limiting cross-site tracking. This means the feature could potentially disrupt traditional online advertising methods that rely on tracking users' behavior based on their IP addresses.

Here's how it works: users will need to opt in to activate IP Protection. Initially, the feature will focus on Google-owned domains and be available for U.S.-based IP addresses. Google plans to roll out this feature in phases, starting with a single company-owned proxy server responsible for routing web traffic. Future updates will include a more complex system for added privacy. So, what does this mean for your business?
If you rely heavily on targeted advertising, you may need to rethink your strategies. The feature is still in its early stages, but it's essential to stay ahead of the curve and consider how these privacy changes could affect your marketing efforts.

P.S. Privacy is not just a user concern; it's a business concern too. Stay updated and adapt your strategies to meet the evolving digital landscape. Don't get left behind!

7. Google's Q3 '23 Ad Revenue Bounces Back - Google's parent company, Alphabet Inc., has reported an 11% year-on-year increase in search advertising revenue for Q3 2023. If you're wondering why this matters to you, it's simple: this uptick indicates a stabilizing ad market, which could be a golden opportunity for your business.

Why is this important? The 11% gain in search revenue is a significant improvement from the 5% loss reported in the previous quarter. This suggests that the digital advertising landscape is recovering, making it a ripe time for businesses like yours to invest in online advertising. Alphabet's CFO, Ruth Porat, stated that the "fundamental strength of our business was apparent again in Q3," with total revenue of $77 billion, up 11% year over year. The report also highlighted a 12.5% increase in YouTube ad revenue, while Google's advertising network saw a 2.6% decline. However, this decline is an improvement over the previous quarter, signaling a positive trend. Sundar Pichai, Google's CEO, emphasized the role of AI-driven innovations in driving this growth, particularly in Search and YouTube.

8. IndexNow's Impressive Growth: 1.4 Billion URLs Submitted Daily For Indexing - If you're a business owner with an online presence, you know how crucial it is for your website content to be up to date in search engine results. The problem? Search engines often lag behind in reflecting the latest changes on your website.
This is where IndexNow comes in, a service that has made significant strides in solving this issue. Established two years ago, IndexNow aims to streamline how websites communicate their content changes to search engines. The service has seen exponential growth, with 60 million websites joining daily and a staggering 1.4 billion URLs submitted each day. The platform bridges the gap between search engine results and real-time website content by sending a simple "ping" to participating search engines whenever a URL is added, updated, or deleted. This ensures that search engines crawl only the updated content, making the process more efficient for both businesses and search engines.

IndexNow is integrated with popular platforms like WordPress, Wix Premium, and Duda, making it easy for website owners to adopt. If you're using SEO plugins like Yoast, All in One SEO, Rank Math, or SEOPress, IndexNow is already included. Even if you're not using these services, activating IndexNow is straightforward: generate an API key, host it on your web server, add the necessary code to your website, and monitor the details via webmaster tools.

9. Microsoft's PubCenter Relaunch: The Google AdSense Alternative - If you're looking to monetize your website, Microsoft has relaunched its PubCenter as a compelling alternative to Google AdSense. For those unfamiliar with these terms, monetizing your website means displaying ads to earn revenue. Google AdSense has been the go-to platform for this, but Microsoft's PubCenter is stepping up as a strong competitor.

Why should you care? PubCenter offers a way to display both native and display ads from Microsoft's advertising network. The platform is not new; it's been around since 2008. However, Microsoft is repositioning it as a U.S.-only pilot program. The process is simple: choose an ad format, add some code to your website, and start earning every time an ad is displayed.
There are no signup costs, revenue minimums, or volume requirements.

What sets PubCenter apart? Microsoft claims to offer "higher engagement and more revenue" compared to Google AdSense. The platform allows you to use its ads alongside Google AdSense ads, serving Microsoft's ads only when it predicts a higher bid for you. This flexibility can be a game-changer for small and mid-sized publishers looking to maximize their ad revenue. Currently, PubCenter is open only to U.S.-based businesses, but if you're outside the U.S., you can join a waitlist for when international support is added. If you've been relying solely on Google AdSense, this could be the perfect time to diversify your revenue streams.

10. Microsoft's Q3 Surge in Ad Revenue - Microsoft has just reported a remarkable 10% year-on-year increase in its search and news advertising revenue for Q3 2023. If you're not familiar with the world of online advertising, this is a significant metric that indicates the health of the digital advertising ecosystem. As a business owner, this news should catch your attention because it signals a recovering ad market and the growing importance of diversifying your advertising platforms.

Why is this surge significant? For starters, it marks a substantial jump from last quarter's 3% increase. This growth suggests that ad spending is bouncing back after the economic downturn, offering more fertile ground for your business to advertise and reach potential customers. Microsoft's overall revenue in productivity and business processes also rose by 13% to $18.6 billion, further emphasizing the company's strong market position.

So, what does this mean for you? If you've been relying solely on Google for your online advertising, now might be the time to consider Microsoft's platforms as well. With the ad market recovering and Microsoft showing strong performance, diversifying your advertising strategy could be a wise move.

11.
Meta's Q3 Profits Skyrocket to $11.6 Billion - Meta Platforms, Inc., the parent company of social media giants like Facebook, Instagram, and WhatsApp, has reported a staggering $11.6 billion in profit for Q3 2023.

So, what's driving this success? Meta's Q3 revenue soared by 23% year-on-year to $34.15 billion. The company saw a 31% increase in ads viewed during the quarter, even though the average price per ad decreased by 6%. This is the smallest decline in seven quarters, signaling a robust ad market. Meta's CFO, Susan Li, attributes this to "ongoing improvements to ad targeting and measurement," which are driving better results for advertisers. Cost-cutting measures also played a role: Meta has reduced its workforce by about a third and cut expenses by 7% from a year earlier. The company is also heavily investing in AI-powered marketing planning and ad measurement to drive growth. Meta CEO Mark Zuckerberg announced plans to hire more AI-focused technologists, emphasizing the role of AI in the company's future.

12. HubSpot and TikTok's Game-Changing Partnership For B2B Lead Generation - HubSpot and TikTok have joined forces to redefine B2B lead generation. If you're a business owner, you know how crucial lead generation is for growth. This partnership aims to make that process more efficient and cost-effective. HubSpot's CRM (Customer Relationship Management) platform will now integrate seamlessly with TikTok, allowing businesses to automatically capture leads from the social media giant. This is TikTok's first CRM lead generation collaboration.

Why is this important? Small and medium-sized businesses are grappling with rising customer acquisition costs. HubSpot's research shows that 53% of such businesses in the U.S. have seen these costs go up from 2021 to 2022. TikTok, a platform where over half of its U.S.
users discover new brands, aims to alleviate this issue. The integration offers automated lead capture from TikTok, turning its highly engaged audience into potential high-value customers. Once you link your TikTok for Business account with HubSpot, you can create lead-generation ads that automatically sync leads into HubSpot's CRM in real time. This centralizes all your prospects, making it easier to manage your sales funnel. Plus, you can engage with these new leads using HubSpot's Marketing Hub and determine the effectiveness of your campaigns through AI-powered analytics. As an added incentive, the first 500 advertisers to integrate HubSpot CRM with TikTok will receive $200 in TikTok ad credits. Currently, this integration is only available in the U.S. and Canada but is expected to expand to other countries soon.
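The IndexNow flow described in item 8 (generate a key, host it as a text file on your server, then ping the endpoint with the changed URLs) can be sketched as follows. This builds the JSON body from the public IndexNow protocol; the host, key, and URLs below are placeholders, and the actual HTTP request is left as a comment so the sketch stays self-contained:

```python
import json

# Sketch of an IndexNow bulk submission, following the public IndexNow
# protocol: POST a JSON body to api.indexnow.org listing the changed URLs.
# The host, key, and URLs below are placeholders, not a real site.

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

def build_indexnow_payload(host, key, urls):
    """Build the JSON body IndexNow expects for a bulk URL submission.

    The key must also be hosted as a plain-text file at
    https://<host>/<key>.txt so the search engine can verify ownership.
    """
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": list(urls),
    }

payload = build_indexnow_payload(
    host="www.example.com",
    key="abc123",  # placeholder; generate your own random key
    urls=[
        "https://www.example.com/new-post",
        "https://www.example.com/updated-page",
    ],
)
body = json.dumps(payload)
print(body)

# To submit for real, POST `body` to INDEXNOW_ENDPOINT with a
# Content-Type: application/json header, e.g. via urllib.request
# or the requests library.
```

A single ping covers every participating search engine, which is why the SEO plugins mentioned above can bundle it with no per-engine configuration.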
If you're in the world of finance, you'd know today's guest from YouTube — but you've probably never heard his real name. However, today and for the first time, he chooses to associate his actual identity with his YouTube channel! It's an honour to introduce to you, Mr. How Money Works himself, Darin Soat. On his YouTube channel, Darin combines captivating storytelling with high-quality, sensible information that helps you to make better financial decisions. Today, he joins us to explain How Money Works, why he chose to create his channel anonymously, and how he feels after his grand reveal. He describes how his channel informed his career as an investment banker, and gives us his insider breakdown of how influencer businesses work. Then, we dive deep into YouTube as we explore the problems with today's financial influencers (finfluencers), how these problems are carried through to the crypto market, why it's rare to find high-quality financial information on YouTube, and everything you need to know about the gamification of investing, creating passive income, and the ins and outs of investing from the perspective of one of YouTube's top finfluencers, Darin Soat! Key Points From This Episode: We're thrilled to reveal the real identity behind How Money Works – Darin Soat! (0:00:42) Darin's professional background. (0:01:41) A thorough description of How Money Works, straight from the source. (0:04:33) Exploring Darin's background in investment banking. (0:07:08) How he chooses content for his YouTube channel. (0:11:05) Why he created his channel anonymously, and how he feels after his reveal. (0:12:54) How his channel impacted his work while he was still in investment banking. (0:14:54) Darin's summation of how influencer businesses work. (0:17:00) The problems with today's financial influencers on YouTube. (0:21:17) How the aforementioned problems relate to the crypto market. (0:27:11) His criteria for selecting sponsors for How Money Works. 
(0:32:03) Why high-quality personal financial information is rarely seen on YouTube. (0:33:45) Darin's advice on side hustles and creating passive income. (0:36:25) How financial influencers and the gamification of investing affect real-world investors. (0:40:11) What everyone needs to know about investing, according to Darin. (0:50:38) Links From Today's Episode: Darin Soat on X — https://twitter.com/DarinSoat How Money Works — https://www.youtube.com/@HowMoneyWorks How Money Works on X — https://twitter.com/howmoneyworksyt Compounded Daily — https://www.compoundeddaily.com/ How History Works — https://www.youtube.com/channel/UCb9mpGh9PQjFXOG_irzrFoA Google AdSense — https://adsense.google.com/start/ The Index Card — https://www.amazon.com/Index-Card-Personal-Finance-Complicated/dp/1591847680 Dumb Money — https://www.imdb.com/title/tt13957560/ MrBeast on YouTube — https://www.youtube.com/@MrBeast Patrick Boyle on YouTube — https://www.youtube.com/@PBoyle ‘Reasons to Avoid Index Funds' — https://www.youtube.com/watch?v=fvGLnthJDsg Robinhood Crypto — https://robinhood.com/us/en/about/crypto/ Rational Reminder on iTunes — https://itunes.apple.com/ca/podcast/the-rational-reminder-podcast/id1426530582. Rational Reminder Website — https://rationalreminder.ca/ Rational Reminder on Instagram — https://www.instagram.com/rationalreminder/ Rational Reminder on X — https://twitter.com/RationalRemind Rational Reminder on YouTube — https://www.youtube.com/channel/ Rational Reminder Email — info@rationalreminder.ca Benjamin Felix — https://www.pwlcapital.com/author/benjamin-felix/ Benjamin on X — https://twitter.com/benjaminwfelix Benjamin on LinkedIn — https://www.linkedin.com/in/benjaminwfelix/ Cameron Passmore — https://www.pwlcapital.com/profile/cameron-passmore/ Cameron on X — https://twitter.com/CameronPassmore Cameron on LinkedIn — https://www.linkedin.com/in/cameronpassmore/
In this episode, we dive into the digital market, exploring how cycles and trends repeat. I share that many people wonder why the latest trends don't work for them, but that's common. The key is to spot new trends early in order to achieve greater financial success. I look back at several cycles I've witnessed over my career, up to the current fad: the "high ticket" concept. I cover different sales strategies, emphasizing that all of them have potential, especially early on. I also recall my early journey of making money online through blogs, via partners like Google Adsense, Amazon, Clickbank, and Aweber, and highlight the incredible experiences those earnings made possible. While mentioning the launches and courses that have happened over the years, I always stress that my students' success is their own merit. And I firmly believe that having the right timing in the market is crucial, as is the strategy of starting small and expanding as results come in. I also discuss the importance of protecting yourself against problems such as ad account bans. In upcoming episodes I'll cover simplifying businesses, pricing strategies, and ways to sell more efficiently. I reinforce the idea that it's essential to keep up with how the world evolves, and I promise to reveal a new business model that can help listeners pull their businesses out of stagnation. See you next time! _ Follow Bruno on social media: YOUTUBE → https://www.youtube.com/c/brunopicinini INSTAGRAM → https://www.instagram.com/brunopicinini PODCAST → https://brunopicinini.com/podcasts
In this episode, Beth Scheer, Talent Partner at Homebrew, talks about the drivers for hiring a coach and what to look out for while going through the process. Key Takeaways: Why should a founder look for a coach? Hiring a coach is a big commitment. Make sure you do a reference check with people the coach has worked with; asking the right questions is a must. A good coach for someone else might not be the right coach for you. How to interview a coach. Costs of hiring a coach. Red flags to look for when hiring a coach. About today's guest: At Homebrew, Beth Scheer works closely with founders and their executive team members as an advisor on all things Talent-related, including recruiting, diversity & inclusion, compensation, onboarding, and people operations. Before Homebrew, Beth spent 5+ years at Salesforce leading executive search, sales leadership search and the sales growth/professional services recruiting teams. Before that, she built the Google AdSense team in 2003 and then spent 6 years hiring for business operations, corporate communications, and various engineering teams. She is a proud graduate of The Colorado College with a BS in Psychology. https://www.linkedin.com/in/bethscheer/ beth@homebrew.co ___ Thank you so much for checking out this episode of The Talent Tango, and we would appreciate it if you would take a minute to rate and review us on your favorite podcast player. Want to learn more about us? Head over to https://www.elevano.com Have questions or want to cover specific topics with our future guests? Please message me at https://www.linkedin.com/in/amirbormand (Amir Bormand)
The giant Chinese property developer, Evergrande, has played down its decision to file for bankruptcy protection in the United States. The company described the move as a 'normal procedure'. We hear concerns that property values are falling faster than Beijing has revealed. And we will look ahead to the Women's World Cup final over the weekend - England will take on Spain. Co-hosting the tournament was expected to generate about a third of a billion dollars for the Australian economy. Google AdSense - a technology used by Google to serve advertisements based on website content - does not support indigenous African languages. So what is Google doing to help African language websites monetise their content?
Filing for bankruptcy protection will allow the heavily-indebted Evergrande to protect its assets in the US as it works on a multi-billion dollar deal with creditors. The company defaulted on its huge debts in 2021, which sent shockwaves through global financial markets. The move comes as problems in China's property market add to concerns about the world's second largest economy. England will take on Spain in the Women's World Cup final over the weekend. Co-hosting the tournament was expected to generate about a third of a billion dollars for the Australian economy. Google AdSense - a technology used by Google to serve advertisements based on website content - does not support indigenous African languages. So what is Google doing to help African language websites monetise their content?
It's Friday, so it's time for another episode of Niche Pursuits News, with your hosts Spencer Haws and Jared Bauman. This week, as always, they bring us the latest in SEO and digital marketing news. They kick things off with a big news item: Google has announced that it's testing live links in the Search Generative Experience. The good news is that it looks like an expanded featured snippet that may drive more traffic to our individual websites. Spencer shares an example of what it looks like currently, and he and Jared talk about how Google may be testing different formats to see what users prefer. They also talk about Google's announcement of "new" things you can do with generative AI in search, the fact that AI-powered overviews are faster, and Google's focus on article publish dates. Spencer also shares a look at his SGE, and he and Jared talk about whether or not the results they get are relevant and what the future might look like for SGE. The next portion of the podcast is Shiny Object Shenanigans. Spencer begins by talking about his Amazon Influencer side hustle, which earned around $1800 over the last month. He talks about how he got approved for the program with a Facebook page he's been building (and spoke about on a recent episode) and handed the entire project over to his oldest son. Jared then shares an update on his Amazon Influencer program, which is up to 329 videos and brought in $2870 in July. He only started in May, so growth is quite impressive, and he has plans to continue to add more videos and take advantage of this massive opportunity. The next topic is the Weird Niche Site they've both discovered. Jared goes first and talks about Marcas de Whiskey (Whiskey Brands, in Spanish), a whiskey review site that was started last year. It's a DR 9 and ranks for over 50k keywords. Although it started out in Spanish, it switched over to English and has seen its traffic triple over time, publishing over 1200 articles in a year. 
Jared talks about how well the author satisfies search intent and how this may be the main driver of traffic. Spencer nails it this week by finding a site that's both weird and very successful: Wheel of Names. This site, a DR 71, gets between 5 and 11 million visitors each month, according to Ahrefs and Similarweb, and traffic is growing. It's ranking for all kinds of "wheel" related queries and is monetized with Google Adsense ads. All in all, another great episode from Jared and Spencer with lots of interesting news, ideas, and advice. Tune in next week for more! Be sure to get more content like this in the Niche Pursuits Newsletter Right Here: https://www.nichepursuits.com/newsletter Want a Faster and Easier Way to Build Internal Links? Get $15 off Link Whisper with Discount Code "Podcast" on the Checkout Screen: https://www.nichepursuits.com/linkwhisper
Is Google sending fewer visits to your website? Before we start today's episode, I'm bringing you a free SEO tool that will help you climb positions on Google. You'll be able to detect flaws on your site, review inbound and outbound links, run keyword analysis, analyze the competition, and set up alerts. Just go to https://borjagiron.com/ahrefs and start using what may be the best free SEO tool on the market. Remember: borjagiron.com/ahrefs. The link is in the description. Hello and welcome to the "SEO para Google" podcast. I'm Borja Girón, and every Wednesday you'll learn everything you need to know about search positioning to increase your visits, reach the top positions on Google, and generate more sales. Remember to join the Comunidad Emprendedores at https://borjagiron.com/comunidad, where you'll get access to Mastermind sessions every Monday with me and the other entrepreneurs, the secret podcast, the challenges, and the Telegram group categories on Instagram, social media, finance, cryptocurrencies, health, artificial intelligence, marketing, podcasting, productivity, SEO, and everything you need to unblock your business. And now... Are you ready? Let's begin! The quick answer is yes. For years Google has been showing more direct results: snippets that answer the user's search without the need to click. Example searches: - How much is one Bitcoin in euros. - The weather in Madrid. - Flight Madrid - Barcelona. - News. With the arrival of AI there will be more direct answers. But... 1: Google lives off its advertisers. 2: It also lives off the ads it places on websites and blogs through Google Adsense. It's not really in Google's interest to stop sending traffic. But it can gradually change its business model, and change makes that happen.
Factors: 1: People spend less time searching on Google and behave differently. They used to read blogs; now they listen to podcasts. They used to search on Google; now they search on TikTok for restaurants or trips. They're starting to use ChatGPT. We have to craft content that makes people want to learn more. 1: Write persuasively. 2: Offer the solution, but in a way that requires clicking to learn more. In the description, for the best email marketing tools, I include the links directly, but the discounts and videos are not there. 3: Create videos and add them. At the end of the video, mention that you've left a link below. 4: Use other traffic sources: ads, collaborations, newsletters... 5: Find other ways to reach your audience (podcast, YouTube, social media, newsletter, books, Telegram). 6: Put your products where the audience already is, such as Amazon or Udemy. 7: Detect niches where AI or direct answers don't apply, using Google Trends, ChatGPT, and Google Keyword Planner. Example: downloads (music downloads, template downloads, report downloads). Example: an online store. Example: a blog that tells a story, rankings, lists. Start taking action now. Join the entrepreneurs' community: https://borjagiron.com/comunidad Remember to subscribe to the podcast so you don't miss the rest of the news, updates, tricks, and trends in the SEO world. If you want to keep listening to these episodes, share them, hit like, leave 5 stars, or comment on the episode. You can also access the SEO course at https://triunfacontublog.com I'm Borja Girón, you've been listening to the SEO para Google podcast, and I'll hear you in the next episode.
Vellonera, or "Jukebox". For years I've been warning that social media has become a place ruled by "pay for play", commonly known in commercial radio slang as "payola". Payola consists of secretly paying a radio host, under the table, to give one song preference over the others. Anyone who knows a bit about radio knows that a song's popularity is closely tied to how many times it gets played on the air. Of course, nowadays, with radio on its deathbed, that popularity can also be gained through other channels. But radio still influences whether a song becomes popular or not. And do you know what the problem is? Payola is illegal! And it has been since 1934. That doesn't mean the practice doesn't exist. It means that if you get caught, the FCC fines are steep. Payola also exists on the Internet. The big difference is that it's more brazen and, on top of that, legal. Don't believe me? What do you think Google Adsense, YouTube Ads, Facebook Ads, LinkedIn Ads, Twitter Ads, Instagram Ads, etc., etc., etc. are (as Yul Brynner said in "The King and I")? In all of these services you pay so the public sees your content. If you don't, the social network profits anyway and you get no reach at all. They even have a cute little name for it: "user generated content". That's why, as early as episode 179, we said that "when something on the Internet is free, the product is you". Years later, Netflix's documentary "The Social Dilemma" confirmed it repeatedly. The most brazen example of payola is YouTube. There was a time when you uploaded a video to YouTube, they placed ads on it, and you received a commission.
Later they eliminated open monetization and restricted it to channels with more than 1,000 subscribers and over 4,000 watch hours in the most recent 12 months. If your channel doesn't meet those criteria, you can't monetize your content. But YouTube warns that they will place ads on it anyway and keep the money. On top of that, monthly commissions have dwindled over the years, to the point that today YouTube creators receive mere crumbs per thousand viewers. Of course, all of this has a solution. You can buy ads that promote your content. And sure, more traffic means more views and more subscribers, whether on YouTube or any other platform. But that's a flawed idea. Do you know why? Because you'd be paying to promote content that lives on someone else's platform. No matter how much you pay to promote your content, whether on YouTube or any other social network, they will keep controlling the algorithm. That means they could keep closing the "faucet" to force you to pay more and more for the same traffic. What's more, with the new AI-based algorithms, like Google's new generative AI, every day they extract more of the content you publish and place it in the SERPs as if they had provided it themselves. That way you do all the work and they capture all the traffic. So the question is: if social networks and search engines control who gets traffic on the Internet, and if the easiest way to get that traffic is to pay, wouldn't it make more sense to direct it to an Internet property you control instead of sending it to a social network? And taking that idea even further, wouldn't it also be sensible to place those ads where the traffic octopuses themselves can't manipulate them or even hide them? That's what we're going to talk about today.
Since you're going to pay "payola" anyway, here's how to get the most value for your money. OTHER EPISODES YOU MAY BE INTERESTED IN:
I remember the first webinar I ever watched about starting an online business. It was about niche blogs, and how to find an SEO-friendly domain, research a few keywords, write a few blog posts, and insert Google Adsense ads and Amazon affiliate links. I was fascinated, and the whole process made perfect sense to me. Until I actually sat down to create the first blog, and I realized it's just not as easy as it looks on a webinar. That was just one of the harsh realities I've learned about running an online business. In the past 12 years I've learned many lessons that I'm sharing with you in this episode of the Tiny Course Empire podcast. Prefer a transcript? Here you go! What you'll learn in this episode: Why doing what you don't want to do is a good thing, and not just for your business. The outsourcing secret successful entrepreneurs don't tell you. Why you probably don't want to turn that hobby you love into a business. How to make better use of those courses you buy and the webinars you watch. What I learned about shiny new tools when I was just 17 (and it's just as true now as it was then). The one thing every business has in common, and it just might be the thing you dislike the most. Resources mentioned: Six-Figure Systems is my monthly program where we focus not just on learning new strategies, but also on implementing what you learn. Join us for a 7-day all access trial for just $7. Brooke Castillo says that life is always 50/50, and that we wouldn't want it any other way. The E-Myth Revisited by Michael Gerber reminds us that turning a hobby into a business might not be the best choice Lynn Terry teaches online marketing at ClickNewz.com
Monetizing your YouTube channel as a DJ requires a combination of building a strong brand, creating engaging content, and using different monetization strategies to generate revenue. Here are some tips to help you monetize your YouTube channel as a DJ with DJ Mekzy:
Build a strong brand: Develop a unique brand that reflects your style and personality as a DJ. This can help you stand out from other DJs and attract a loyal fan base.
Create engaging content: Produce high-quality videos that showcase your skills as a DJ, including live performances, tutorials, and behind-the-scenes footage. The more engaging and valuable your content is, the more likely people will want to watch and share it.
Use ads and sponsorships: Monetize your videos by using ads and sponsorships. You can use Google AdSense to run ads on your videos and earn money based on the number of views and clicks. Additionally, you can partner with brands or companies for sponsorships and product placements in your videos.
Sell merchandise: Create and sell merchandise like t-shirts, hats, or other branded items that your fans will want to buy. You can promote your merchandise through your YouTube channel and social media platforms.
Offer premium content: Create exclusive or premium content, such as longer DJ sets or personalized mixes, and charge a fee for access.
Use crowdfunding: Use crowdfunding platforms like Patreon or Kickstarter to fund new projects or ongoing content creation. Fans can pledge money to support your channel, and you can offer rewards or exclusive content in exchange for their support.
Collaborate with other DJs: Collaborate with other DJs or music producers to create joint content or remixes. 
This can help you reach a wider audience and gain more exposure for your channel. By following these tips, you can monetize your YouTube channel as a DJ with DJ Mekzy and turn your passion for music into a profitable business.
Support this podcast at — https://redcircle.com/we-create-the-vibes-podcast/exclusive-content
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Get ready for an exciting episode of This Week in Niche Pursuits News, where we dive into a rollercoaster of digital marketing stories: from a $9 million employee theft scandal at Ezoic to weird niche websites and ad network comparisons. This episode is a treasure trove of interesting discussions to help close out another week! Things kick off with a jaw-dropping case of employee theft at the popular ad tech company Ezoic. The guys explore how one crafty individual tried to steal $9 million by altering the bank account details on the company's Google AdSense account. Luckily his plan was foiled, but not before Ezoic faced layoffs and cash flow struggles due to the incident. The conversation then switches gears to continue last week's discussion on Google's latest update, and how it's expanded to encompass all types of reviews, not just products. The conversation also gets into the fascinating world of weird niche sites, like sleepinginairports.net – a handy guide for those seeking a good night's sleep during layovers or as a hotel alternative. This unconventional site thrives on crowdsourced reviews and enjoys a whopping 46,000 organic visits per month (according to Ahrefs, so their real traffic is surely much higher)! They also cover a mysterious travel site whose social media presence is suspended and a Lego site with untapped potential in email marketing and social media. And the question on every site owner's mind is: which ad network pays more? We tackle this question head-on, comparing Ezoic and Google AdSense, considering factors like ad setup, programmatic display, and site speed. Spencer then dishes on his latest side hustle – outsourcing product review videos for the Amazon Influencer program. With a team of five videographers and a video backlog, Spencer is set to rake in commissions from product page views. 
To wrap things up the guys discuss the perks of starting with ads on a website for that initial burst of energy and income and share an SEO tactic that involves hunting for unlinked mentions and requesting links. In a nutshell, this latest episode of This Week in Niche Pursuits News covers a wide array of digital marketing topics that are both fun and interesting for anyone involved with online businesses. Be sure to get more content like this in the Niche Pursuits Newsletter Right Here: https://www.nichepursuits.com/newsletter Want a Faster and Easier Way to Build Internal Links? Get $15 off Link Whisper with Discount Code "Podcast" on the Checkout Screen: https://www.nichepursuits.com/linkwhisper
If you're an influencer in the early stages of podcasting, tune in to keep your expectations in check, hear what's worked (& what hasn't) from those who have walked before you, and make the most of the opportunity ahead. Asha Christina (YouTube personality) speaks to her experience starting a podcast with an existing audience in the same niche on YouTube; Avery Warner (90 Day Fiance) shares what it was like starting a podcast outside of the "dating" niche for which she gained international attention; and Erin Dugan (producer) uncovers the expectations and realities of launching a podcast under major network PodcastOne alongside international celebrity Savannah Chrisley.
GUESTS - LEARN MORE:
Asha Christina is a self-love and relationships thought leader for women and host of the Quality Queen Control podcast. Prior to starting her podcast in May of 2020, Asha was most widely known for her YouTube channel, where she organically amassed 200k+ subscribers. In under two years, Asha had successfully migrated her audience over to her podcast, averaging 200k+ downloads per month. Podcast: https://apple.co/3Os0heK
Avery Warner gained international attention and a following of 160k+ on Instagram following her stardom on Season 4 of 90 Day Fiance - Before the 90 Days. Two years after her season aired, Avery started her Chiller Queen Podcast, pivoting far from the relationship and dating niche for which she was known, and moving into exploring topics such as true crime, conspiracies, paranormal events, growing up in a cult, psychics, and more. Podcast: https://apple.co/3OpR9XR
Erin Dugan recently founded The Cast Collective studio, located in Nashville, where she has led creative projects for influencers like Libby Vincek (Survivor); Shannon Ford (Very Cavallari/Dear Media's Probably a Podcast); Victoria Fuller, Jen Saviano, and Christen Whitney (The Bachelor franchise); and more. 
Under PodcastOne, Erin produces the Unlocked with Savannah Chrisley podcast with reality TV star Savannah Chrisley, which reached Top 10 in All Categories. In its first 5 months, the podcast hit 45M+ video views on YouTube and 2M+ podcast downloads, bringing in thousands in monthly revenue from Google AdSense alone. Podcast: https://apple.co/3Y6qYIM
CONNECT FURTHER WITH ANGIE:
Podcast: https://www.yougetwhatimsaying.com
Listen Early and Dynamic Ad-Free on Apple Podcasts: https://apple.co/44Y6rbY
Social Media: https://beacons.ai/theactualangie/socialmedia
Contact: yougetwhatimsaying.podcast@gmail.com
Monetize Your Podcast: https://beacons.ai/theactualangie/monetize
Support the Show: https://www.buymeacoffee.com/yougetit/membership
ADVERTISE ON THE SHOW: To inquire about host-read ads or to become the show's next Presenting Sponsor, please send an email to yougetwhatimsaying.podcast@gmail.com.
EPISODE CREDITS:
Podcast Logo: Abby Murdock
Podcast Cover Photography: April Bowers Creative
BE ADVISED: Formerly titled Podfluencer Society (and before that, 4 Things For Your Podcast), episodes 1-114 share insights and strategies specifically for podcasters. As the podcast has undergone a complete rebrand, some links and information referenced in earlier episodes have likely changed. Please contact us at yougetwhatimsaying.podcast@gmail.com if you cannot find what you are looking for. The views and opinions expressed in each episode are those of the individual contributors and do not necessarily reflect those of the podcast host and team or the owner of this Intellectual Property. This podcast is not an authority on legal advice, and listeners are encouraged to seek professional counsel with regard to their brand, business, and otherwise. Many of the product and service promotions in each episode are under the negotiated terms of affiliate or sponsorship agreements. If a link is clicked and a purchase is made, an affiliate commission may be received. 
However, we recommend products or services that we personally endorse and believe may be beneficial to others. This information is disclosed in accordance with the Federal Trade Commission's 16 CFR, Part 255: “Guides Concerning the Use of Endorsements and Testimonials in Advertising."
What's changing podcasting and digital marketing:
- Several podcasting companies published their revenue reports.
- The podcast hosting and monetization company Spreaker is now free.
- Podcast listening in the U.S. hits a new record.
- Do media agencies not fully understand sonic branding?
New podcast:
- 'El podcast blanco del audio digital'.
Recommended podcast: "Marvel's Wolverine, La larga noche" (The Long Night). After a series of mysterious deaths in Burns, Alaska, special agents arrive to investigate and soon discover there is more than meets the eye. It's a ten-episode fiction podcast with Logan as its protagonist.
On today's episode, we speak with Drew Smith, founder and editor of The Liberty Line, a Philly sports blog. In just a few short years, Drew has built an audience of tens of thousands by creating a media brand off of his t-shirt sales. He has deep experience in how creators monetize their audience through the sale of merchandise. Listen now as Drew walks us through the process of building his site, creating and selling third-party unlicensed sports t-shirts, and how the Eagles winning the Super Bowl in 2018 was his launching point.
(1:05) Introduction to Drew Smith and how his previous work as Marketing Director for RushOrderTees and creating a successful hyper-specialized t-shirt for Eagles fans kicked off his entrepreneurial journey.
(2:46) Creating shirts that went viral and shifting the business model to creating merch for bloggers and influencers.
(5:04) Leveraging apparel to create cash flow for your business and building an audience for The Liberty Line.
(6:47) How a viral shirt from topical events increased brand exposure.
(12:22) Any niche can take an inside joke in their space or topical moments and turn them into revenue.
(16:44) Creating products will bring an audience to a website, but the quality of the content makes them come back and become engaged fans.
(18:04) Don't be afraid to ask others for help, and have people to share ideas with. 
Also how adding Google Adsense to your high-traffic website can boost your revenue.
(24:02) Partnering with others to create exclusive products and leveraging audiences.
(34:26) Work-life balance for a creator-based business and how going full-time right away is often not the best idea.
(43:21) The constant creation of content means being willing to drop what you're doing and do something about what is topical to stay relevant.
(49:41) Tools that Drew uses to build and grow The Liberty Line.
(53:16) Closing thoughts with Kyle and Jason.
Guest Social Media Links:
The Liberty Line: https://thelibertyline.com/
Twitter: https://twitter.com/LibertyLinePHL
Instagram: https://www.instagram.com/libertylinephl/
Facebook: https://www.facebook.com/LibertyLinePHL
Twitch TV: https://www.twitch.tv/thelibertyline
Follow us on Social Media -----
Listen on your favorite streaming platform: https://link.chtbl.com/MonetizeMedia
Twitter: https://twitter.com/monetizemediahq
TikTok: https://www.tiktok.com/@monetizemediahq
Linkedin: https://www.linkedin.com/company/monetize-media/
YouTube: https://www.youtube.com/channel/UC8a20ZIx7dx1reQ4pdvCBPA
Check out our website for more information: (
On today's episode, we speak with Bobby Trosset, a former NFL pregame radio host who is striking out on his own with a podcast and YouTube channel. Bobby is making the difficult switch from mainstream media to the creator world, with content focused on the Baltimore Ravens, Baltimore sports, and beyond. He has multiple podcasts, two YouTube channels, and is thinking about ways to grow his audience on other mediums. In just under nine months, he's amassed thousands of subscribers and tens of thousands of downloads. Yet the crushing doubt of building your own content business in a niche market hangs over his head daily. Listen now as Bobby walks us through how he made the switch from radio to podcasting and YouTube out of necessity, what he is doing to build and monetize his audience, and how the constant uncertainty of future success stays with him. On to the interview…. ● (0:09) Episode introduction with Kyle and Jason. ● (1:07) Bobby's welcome and introduction. ● (2:10) Bobby's background, the creation of the Ravens Vault, and the importance of YouTube. ● (7:51) How Bobby handled the new media landscape and decided to create his own content. ● (15:10) The daunting nature of striking out on your own and handling online criticism. ● (21:00) The breakdown of Bobby's business plan and how he approaches monetization and revenue streams. ● (27:50) A deeper look at advertising with BlueWire, Google AdSense, and YouTube. ● (34:03) Working in new media and the pros/cons of working on sports content creation without press credentials. ● (39:00) Content differentiation ideas and ditching old media formality. ● (42:05) Expanding business opportunities, online sports betting, and using morals to guide business. ● (48:38) Balancing content creation, audience growth, and monetization. ● (52:08) Tools of the trade talk and growth opportunities. ● (58:15) Where to find Bobby online. 
● (59:11) Closing thoughts with Kyle and Jason.
Tools and Resources Mentioned:
YouTube Creators Hub Podcast: https://podcasts.apple.com/us/podcast/youtube-creators-hub/id843566797
BlueWire Podcasts: https://www.bluewirepods.com/
Google AdSense: https://www.google.com/adsense/start/
Riverside Podcast Recording: https://riverside.fm/homepage
Beehiiv Newsletter Platform: https://www.beehiiv.com/
Rev Automated Transcription: https://www.rev.com/services/auto-audio-transcription
Guest Social Media Links:
Ravens Vault Podcast: (https://podcasts.apple.com/us/podcast/the-ravens-vault/id1635836036)
Ravens Vault YouTube Channel: (https://www.youtube.com/channel/UC1AKndUf3naqrtbFDcN1gDg)
Bobby Trosset:
Website: (https://linktr.ee/bobbytrosset)
Twitter: (
In this episode, Beth Scheer, Talent Partner at Homebrew, talks about the drivers for hiring a coach and what to look out for while going through the process.
Key Takeaways:
Why should a founder look for a coach?
Hiring a coach is a big commitment
Make sure you do a reference check with people the coach has worked with; asking the right questions is a must
A good coach for someone else might not be the right coach for you
How to interview a coach
Costs of hiring a coach
Red flags to look for when hiring a coach
About today's guest: At Homebrew, Beth Scheer works closely with founders and their executive team members as an advisor on all things Talent-related, including recruiting, diversity & inclusion, compensation, onboarding, and people operations. Before Homebrew, Beth spent 5+ years at Salesforce leading executive search, sales leadership search, and the sales growth/professional services recruiting teams. Before that, she built the Google AdSense team in 2003 and then spent 6 years hiring for business operations, corporate communications, and various engineering teams. She is a proud graduate of The Colorado College with a BS in Psychology. https://www.linkedin.com/in/bethscheer/ beth@homebrew.co
___
Thank you so much for checking out this episode of The Tech Trek; we would appreciate it if you would take a minute to rate and review us on your favorite podcast player. Want to learn more about us? Head over to https://www.elevano.com
Have questions or want to cover specific topics with our future guests? Please message me at https://www.linkedin.com/in/amirbormand (Amir Bormand)
Chris Parker is the founder of WhatIsMyIPAddress.com, the number one IP address lookup website. Chris started the website in January 2000, and for the first five years its revenue didn't even cover his internet bill. In 2014, Chris was laid off from his corporate job and was faced with the scary opportunity to make his website a full-time business.
Chris Parker Vroom Veer Stories
Grew up in Tustin, California, and has lived within a 10-mile radius of home his whole life
Went to Cal State Fullerton for about a year; not a great student. Then attended the local community college for more years than he wants to disclose; quit school when he realized his part-time job was paying more than what college grads were earning
Worked in sales for a company called "Club Mac," selling Macintosh computers via a mail-order catalog and in person; not a great salesman; he was talking people into buying cheaper computers
Worked on IT and web stuff for Club Mac near the birth of the internet; at that time there wasn't an easy way to figure out your public IP address
He built a server out of spare parts and set it up in his house to run the first version of WhatIsMyIPAddress.com. No graphics; it just returned the IP address
At some point he was getting alerts telling him the hard drive was nearly full; the traffic logs were building up
Version .1: He offered to answer questions via email; he then used the email traffic to generate a Frequently Asked Questions (FAQ) list
A few years later Google AdSense came along, and he decided to throw up a banner ad; it made a little bit of money; neat! Warning!
Geek Content!: The server ran in Chris's house from pre-Google times until 2014; he had a half-rack server with UPSes in case of a power outage, a very high electric bill from trying to keep the house under 90 degrees, and he was spending way too much money on internet bandwidth
His wife pointed out that it might not be the best idea to have all the business assets in their house; a break-in would effectively kill the business
Strange as it sounds, there were about six co-location facilities in Tustin within 1 mile of his house; you bring your server there, plug it into their facility, and rent power, bandwidth, HVAC, and security
In 2014 he was selling life insurance online; the economy tanked and people were getting laid off; Chris was asked to transition from full time to part time
Went back to full time for a while, but eventually he was let go entirely; should he get a job or work full time on the website business?
They experimented with working on the business for six months, which made enough to keep going; he has now replaced his job income with the business revenue
Version .2 of the website: no graphics, a bar on top, a bar on the bottom, and a wall of text
Version .3: he hired a graphic designer to make the site look much better. GRAPHICS!
Version .4: hired a couple of content writers to create blog content related to online security and internet privacy
How important it is to use a VPN (virtual private network) to secure your internet connection; most businesses out there don't know how to secure their networks, and the ones that do are breached all the time
On a trip to Taiwan from Singapore, his wife got sick; in the airport in Taiwan there was a medical station using an infrared "fever detector" to find sick people and put them in quarantine; Chris noticed in time to put himself between the detector and his wife, and they made it through
Barf bag of the month club: for dog owners who get queasy when picking up dog poo; start collecting those barf bags from airplanes
You can verify that your VPN is working on Chris's site; you can also verify the country of an IP address, which is helpful for checking an internet order before you ship
Chris Parker Connections
Website - WhatIsMyIPAddress.com
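The very first version of the site described above did nothing but return the visitor's IP address as plain text. As a rough illustration of that idea, here is a minimal sketch in Python's standard library; this is hypothetical example code (the handler name, `serve` helper, and port are all invented for the example), not the site's actual implementation:

```python
# Minimal sketch of a "what is my IP" service, in the spirit of the
# first version of WhatIsMyIPAddress.com: no graphics, it just echoes
# the caller's IP address back as plain text. Hypothetical example,
# not the real site's code.
from http.server import BaseHTTPRequestHandler, HTTPServer

class IPEchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # client_address is the (host, port) pair of the connecting peer
        body = self.client_address[0].encode("ascii")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # suppress per-request logging (traffic logs once filled Chris's disk)

def serve(port=8080):
    """Run the echo server (blocks); the port is an arbitrary choice."""
    HTTPServer(("", port), IPEchoHandler).serve_forever()
```

The server only ever sees the address of the connecting peer, so a visitor behind a NAT router or VPN gets back the public-facing address a remote site would see rather than their LAN address, which is precisely what made such a lookup useful.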