Podcasts about 5TB

  • 63 PODCASTS
  • 70 EPISODES
  • 58m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • May 10, 2025 LATEST

POPULARITY

(Popularity chart, 2017–2024)


Best podcasts about 5TB

Latest podcast episodes about 5TB

Table 1 Podcast
You Thought It Was Fake… Here's $50 Million Worth of Footage

Table 1 Podcast

Play Episode Listen Later May 10, 2025 37:12


Play with us online at Phenom Poker: https://play.phenompoker.com/register
Play with us at Table 1: https://table1.vegas

Episode 77: Rory Young is back. After going viral for betting $100,000 that Rich Alati could survive in a pitch-black bathroom for 30 days, a wave of skeptics called BS. Rumors swirled. Reddit threads accused him of staging it. Some said it never happened. Rory said nothing — until now. In this follow-up episode, Rory brings the receipts: 5+ terabytes of time-stamped, uncut footage, legal contracts, interviews, and inside access to one of the wildest prop bets in poker history. This isn't a story. It's evidence.

Show Notes
0:00 – The skeptics, the rumors, and the $50M challenge
1:28 – Why people thought the $100K bet was fake
3:02 – The "drive me there now" Len Ashby story
5:06 – Rory explains how they sealed off the bathroom
6:50 – Footage folder reveal: 5TB of raw video
9:04 – Inside the isolation: food drops, cameras, pushups
11:30 – Was Rich hallucinating? Early footage breakdown
14:00 – The motorcycle helmet & blackout structure explained
17:08 – Did Rory scam anyone? Here's how the side action worked
20:19 – Rich's video diary: “I've got the hang of it now”
22:25 – Buyout negotiations + audio clip of the offer
25:03 – Why Rory didn't sell this to Netflix (yet)
27:14 – Footage of Rich emerging after 19 days
29:55 – Analyzing the “fake” theories with receipts
33:20 – What's next? More prop bets? Submarines?
35:40 – Rory's take on the internet doubters
37:07 – Final word + how to view the raw footage (if you're serious)

Table 1 Links: table1.vegas | Twitter/X | Instagram | YouTube
Art's Links: Twitter/X | SlotMaps – Learn to Beat Slot Machines (seriously) | PokerHQ – Your Poker Network, Upgraded
Justin's Links: Twitter/X | Phenom Poker

About Table 1 Podcast and YouTube Channel: Welcome to the Table 1 Podcast, your ultimate destination for everything high-stakes poker. Hosted by poker enthusiasts Art Parmann and Justin Young, our podcast brings you in-depth interviews with the biggest names in poker, captivating stories from the poker world, and insights into the game's strategies and trends.

Thought Behind Things
"Is Your Business Data Safe? The Cybersecurity Threats Pakistanis NEED to Know” Ft Muhammad Zayn | TBT 433

Thought Behind Things

Play Episode Listen Later Mar 7, 2025 70:10


Guest Introduction: Joining us today is Muhammad Zayn, a seasoned expert in data storage and security solutions. With a proven track record of helping over 300 businesses streamline their data workflows, Muhammad brings a wealth of practical experience to the table. Currently serving as Technical Sales Head at Al Madina Enterprises, he specializes in designing and implementing robust NAS/SAN solutions for businesses of all sizes, ranging from 5TB to petabyte-scale deployments. His expertise encompasses system design, data security and backups, active directory integration, remote file sharing, cloud syncing, and virtual machine applications, ensuring workflow consistency and data security.

Do not forget to subscribe and press the bell icon to catch on to some amazing conversations coming your way!

#thoughtbehindthings #muzamilhasan #muhammadzayn #datasecurity

Socials:
TBT's Official Instagram: https://www.instagram.com/thoughtbehindthings
Muzamil's Instagram: https://www.instagram.com/muzamilhasan
Muzamil's LinkedIn: https://www.linkedin.com/in/muzamilhasan
Zayn's LinkedIn: https://www.linkedin.com/in/muhammad-zayn-cissp-4032b91a1/
Al Madina Ent. Website: https://almadinapk.com/

Podcast Links:
Spotify: https://spoti.fi/3z1cE7F
Google Podcast: https://bit.ly/2S84VEd
Apple Podcast: https://apple.co/3cgIkf

Late Night Linux All Episodes
Hybrid Cloud Show – Episode 24

Late Night Linux All Episodes

Play Episode Listen Later Feb 21, 2025 23:53


The best ways to implement geo-redundancy for containers and VMs using load balancers and Kubernetes, and moving 5TB of storage to the cloud. Send your questions and feedback to show@hybridcloudshow.com. SysCloud: Over 2,000 IT admins already trust SysCloud to protect their SaaS data. Head to SysCloud.com for a...

Hybrid Cloud Show
Hybrid Cloud Show – Episode 24

Hybrid Cloud Show

Play Episode Listen Later Feb 21, 2025 23:53


The best ways to implement geo-redundancy for containers and VMs using load balancers and Kubernetes, and moving 5TB of storage to the cloud. Send your questions and feedback to show@hybridcloudshow.com. SysCloud: Over 2,000 IT admins already trust SysCloud to protect their SaaS data. Head to SysCloud.com for a…

Irish Tech News Audio Articles
Keep "iloveyou" for your Valentine, Not Your Password!

Irish Tech News Audio Articles

Play Episode Listen Later Feb 14, 2025 3:21


"iloveyou" was found among the world's most common passwords - it was used 197,880 times last year, according to the latest report by NordPass. Experts say this password can be cracked in less than a second. While affectionate words are entering the list of the most common passwords annually, cybersecurity experts say their use is a horrible idea. Researchers say that in a 2.5TB database of leaked credentials that they analyzed, there were also other cute phrases people use to secure their online accounts. "princess" ranks among the top 200 most common passwords in the whole world, "valentina" - in Chile, and "sunshine" - in the United States. French people love "loulou" and "doudou" for their passwords - these words are used to express affection for someone. A dangerous habit "While we all know that love might have no limits, the words we use to express our feelings should - especially when it comes to passwords. Being creatures of habit, we then put those words in our passwords - if someone calls their partner "love" daily, it is only natural this word might be on top of their mind when setting online credentials," says Karolis Arbaciauskas, head of business product at NordPass. Every year, NordPass reveals the world's 200 most-used passwords. This year, the company also showcased how they differ among 44 countries worldwide and what kind of corporate passwords people use for their work accounts. "As many as 70% of the passwords in the past year's global list can be cracked in less than a second, and this is highly alarming. With leaked credentials, threat actors can get you locked out of your important accounts, steal your sensitive data, and sell it on the dark web, risking even your physical privacy. And this is only one of the scenarios," says Arbaciauskas. How to improve your account security Besides avoiding loving words in passwords, Arbaciauskas has other recommendations that could easily increase the strength of your online accounts. Create long passwords and avoid dictionary words. They should consist of at least 20 random characters, namely upper - and lowercase letters, numbers, and special symbols. Add multi-factor authentication. Anything - additional confirmation via email or phone, physical security keys, or biometric confirmation - is better than a password alone. Try passkeys wherever possible. Most modern websites allow logging in with passkeys, a new and alternative method of online authentication. This technology is currently considered the most promising alternative to passwords and is greatly supported by most tech giants, including Apple, Microsoft, and Google. Research methodology: The list of passwords was compiled in partnership with NordStellar. They evaluated a 2.5TB database extracted from various publicly available sources, including those on the dark web. No personal data was acquired or purchased by NordPass to conduct this study. Researchers classified the data into various verticals, which allowed them to perform a statistical analysis based on countries. NordPass exclusively received only statistical information from the researchers, which gives no reference to internet users' personal data.

Apfelplausch
Apfelplausch #367: Mac mini Quick Test | iTV Rumors Are Back | Is the "Apple Ring" Coming? | Apple and the Tariffs

Apfelplausch

Play Episode Listen Later Nov 23, 2024 56:51


Thanks for listening despite our odd release schedule. Your loyalty motivates us to keep going. Enjoy Lukas and Roman's look back at Apple's week. To the Apfelplausch app | Listen to the episode on Apple | Listen to the episode on Spotify

Chapter markers:
00:00:00: Intro, mail from listeners and the editorial team (Marco's Mac mini test)
00:17:00: iOS 18: restart as a security feature, and MacBook Pro with quantum dots: Young
00:24:15: iTV is alive again as a planning exercise: Gurman
00:38:35: Oura CEO doesn't believe in an Apple Ring, and AirTag 2 with longer range in 2025
00:50:35: Calm before the storm: Apple's fear of the tariffs

OUR SPONSOR PCLOUD: For Black Friday, the Swiss company pCloud is offering a deal that solves this problem. From November 13 to December 1, 2024, get 60% off the exclusive, limited 3-in-1 bundle (lifetime cloud with 5TB, pCloud Encryption, and the pCloud Pass password manager) or the lifetime storage. -> pCloud Black Friday deal: secure up to 60% off

Listen to Apfelplausch: Never miss an episode again: download our new app → to the app | On Apple | On Spotify | On Radio.de
Support Apfelplausch: On Patreon (thank you!)
Want your listener mail featured in the show? ...then send us your questions, comments, ideas, and experiences to the following addresses:
E-mail: apfelplausch@apfellike.com | roman / lukas@apfelplausch.de
Twitter: follow Apfelplausch (or Roman and Lukas)
Instagram: follow Apfelplausch
Website: apfelplausch.de

Apfelplausch
Apfelplausch #366: New "Home Devices" from Apple | iPhone 17 Rumors | Vision Pro 2

Apfelplausch

Play Episode Listen Later Nov 17, 2024 65:59


Thanks for listening despite our odd release schedule. Your loyalty motivates us to keep going. Enjoy Lukas and Roman's look back at Apple's week. To the Apfelplausch app | Listen to the episode on Apple | Listen to the episode on Spotify

Chapter markers:
00:00:00: Intro & listener mail
00:09:00: A new product category at Apple: smart home display and camera with AI?
00:29:30: iPhone 17 Slim, Pro, SE 2 – rumors
00:44:00: EU vs. Apple – updates
00:48:45: Vision Pro 2 in a year: a hefty CPU update | AirPods with health features

OUR SPONSOR PCLOUD: For Black Friday, the Swiss company pCloud is offering a deal that solves this problem. From November 13 to December 1, 2024, get 60% off the exclusive, limited 3-in-1 bundle (lifetime cloud with 5TB, pCloud Encryption, and the pCloud Pass password manager) or the lifetime storage. -> pCloud Black Friday deal: secure up to 60% off

Listen to Apfelplausch: Never miss an episode again: download our new app → to the app | On Apple | On Spotify | On Radio.de
Support Apfelplausch: On Patreon (thank you!)
Want your listener mail featured in the show? ...then send us your questions, comments, ideas, and experiences to the following addresses:
E-mail: apfelplausch@apfellike.com | roman / lukas@apfelplausch.de
Twitter: follow Apfelplausch (or Roman and Lukas)
Instagram: follow Apfelplausch
Website: apfelplausch.de

The Tech Addicts Podcast
Sunday 26th May - Something has come to the Surface

The Tech Addicts Podcast

Play Episode Listen Later May 26, 2024 110:24


Gareth and Ted chat about being a YouTuber, Logitech MX Keys Mechanical, Microsoft's plans for the CoPilot+ PCs at the Surface event, screenshotting your personal information, UGREEN's latest portable station, Motorola Moto G85, 6TB 2.5-inch portable hard drives and the HMD T21. With Gareth Myles and Ted Salmon. Join us on MeWe. RSS Link: https://techaddicts.libsyn.com/rss iTunes | Google Podcasts | Stitcher | Tunein | Spotify | Amazon | Pocket Casts | Castbox | PodHubUK

Feedback, Fallout and Contributions
Ian Barton on UptimeKuma: Worried that your emby or Plex server has stopped working? UptimeKuma is a monitoring application which you can set up to watch over your computers and services and send you warnings if it can't access the app or server (a minimal Python sketch of this kind of check follows these show notes). Monitoring uptime for HTTP(s) / TCP / HTTP(s) Keyword / HTTP(s) Json Query / Ping / DNS Record / Push / Steam Game Server / Docker Containers. Fancy, reactive, fast UI/UX. Notifications via Telegram, Discord, Gotify, Slack, Pushover, Email (SMTP), and 90+ notification services. 20-second intervals, multiple languages, multiple status pages, map status pages to specific domains, ping chart, certificate info, proxy support, 2FA support.

Banters: Knocking out a Quick Bant
A feature on how to do simple desk-down YouTube review videos. Ted's Logitech MX Keys Mechanical.

News, Mews and Views
Microsoft Surface event: the 6 biggest announcements (we'll come to Recall later in the show).

Hardline on the hardware
Miss the feel of classic audio kit? Tivoli Songbook and Songbook Max bring retro knobs and sliders to modern Bluetooth speakers and DAB radios. UGREEN's latest portable charging solution debuts to deliver 48,000mAh of power at up to 300W. It took 8 years to launch a 6TB 2.5-inch portable HDD up from 5TB - but at least it is not that expensive - at £162.99. HMD T21 tablet debuts - it's a Nokia T21 clone! - £229.

The Wearables Watch Phone Zone
The mobile industry is quietly preparing for the biggest change to your smartphone in a decade - iSIM will hasten the end of SIM cards and allow networks to preload plans on devices. Motorola Moto G85 shown in first render images with modernised design and new display - Moto G84 specs (£194 at AmazonUK).

The Name of the Game
Ayn's new gaming handheld looks like a PSP, and it might just fill the hole in your heart left by Sony's best portable - a Vita more so than the PSP. Anbernic announces a new 1:1 handheld that will let you relive your Game Boy glory days. Microsoft looking to purchase Steam. Sony's futuristic gear - including a gaming controller - YouTube Sony Showcase Video.

Flap your trap about an App
'The Entire History of You': How a lone developer created a free app that records everything you do on your PC — and allows you to rewind and search for anything. And then there's a new Windows AI feature that records everything you've done on your PC - all copying Apple's Time Capsule?

Google Gallows & Chrome Coroner
New 'Add to Chromebook' badge and tabbed PWAs are coming. In the meantime, turn websites into desktop apps yourself instead of waiting for an app - see the badge at this Pixlr example website. You can now hum to find a song on YouTube Music for Android - users can also sing the tune or even play it on an instrument. Gmail moving low-priority emails to refreshed 'Updates' inbox on Android, iOS. Google set to make its largest acquisition ever, threatening Microsoft - What HubSpot Does Now - Another View.

Bargain Basement: Best UK deals and tech on sale we have spotted
Sony WH-1000XM5 £254 from £379. Verbatim GNC-200 200W GaN Charger with 2 x USB-C PD 100 W / 1 x USB-C PD 65 W / 1 x USB QC 3.0 - £65.83. Kindles all on sale again - the top-end Scribe with over £100 off so £304. UGREEN UK to European Plug Adapter PD 30W Travel Adapter with USB C GaN Fast 4-in-1 £13.99, or £12.99 from their website but P&P will apply. Anker Prime 240W Desktop GaN Charger £129 from £199. LISEN 2 in 1 Magsafe Charger Stand for iPhone Foldable Wireless Charging Pad - £16.49. Logitech G PRO X Wireless Gaming Mouse £96 from £139.

Main Show URL: http://www.techaddicts.uk | PodHubUK
Contact: gareth@techaddicts.uk | @techaddictsuk
Gareth - @garethmyles | Mastodon | garethmyles.com | Gareth's Ko-Fi
Ted - tedsalmon.com | Ted's PayPal | Mastodon | Ted's Amazon
YouTube: Tech Addicts

OneDigital
Podcast ONE: December 8, 2023

OneDigital

Play Episode Listen Later Dec 9, 2023 123:45


Podcast ONE: December 8, 2023. Podcast ONE: The Game Awards 2023. Video game reviews: Terra Alia, Kingdoms & Castles, Alien Death Mob, Chessarama, Extremely Powerful Capybaras, Ruinarch, the 1.5TB SanDisk microSD, Zuum Sens M1, Santa Claus Digital 2023, Google Gemini, the 5.7 magnitude earthquake in Puebla/CDMX, SwifDoo with AI, unboxing the Amazfit Active, passwords […] The post Podcast ONE: December 8, 2023 appeared first on OneDigital.

PetaPixel Photography Podcast
Ep. 415: Definitely Not the Sign of a Pro – and more

PetaPixel Photography Podcast

Play Episode Listen Later Oct 27, 2023 50:19


Episode 415 of the Lens Shark Photography Podcast.

In This Episode: If you subscribe to the Lens Shark Photography Podcast, please take a moment to rate and review us to help make it easier for others to discover the show.

Sponsors:
- Build Your Legacy with Fujifilm
- Shop with the legends at RobertsCamera.com, and unload your gear with UsedPhotoPro.com
- Nanlite's new FC-500B and FC300B, and PavoTube deals!
- Get 20% OFF with code SHARKY20 at BenroUSA.com
- More mostly 20% OFF codes at LensShark.com/deals.

Stories:
- If you're going to be a pro, you should do this. (#)
- Canon finally ditches EOS M. (#)
- Wacom expands the line. (#)
- Elinchrom's THREE is just right. (#)
- Fotodiox has a versatile new modifier. (#)
- SanDisk's new 1.5TB card. (#)
- Nomatic and Peter McKinnon have 3 new bags. (#)
- Giggster makes another acquisition. (#)
- Leica's two new lenses. (#)

Connect With Us: Thank you for listening to the Lens Shark Photography Podcast! Connect with me, Sharky James, on Twitter, Instagram, Vero, and Facebook (all @LensShark).

Light Reading Podcasts
OpenVault spots rise of broadband's 'extreme power users'

Light Reading Podcasts

Play Episode Listen Later Oct 25, 2023 9:27


OpenVault CEO Mark Trudeau discusses the emergence of 'extreme power users' who consume more than 5TB of data per month and how operators can manage this small but growing group of customers. Hosted on Acast. See acast.com/privacy for more information.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
FlashAttention 2: making Transformers 800% faster w/o approximation - with Tri Dao of Together AI

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Jul 26, 2023 54:31


FlashAttention was first published by Tri Dao in May 2022 and it had a deep impact in the large language model space. Most open models you've heard of (RedPajama, MPT, LLaMA, Falcon, etc.) all leverage it for faster inference. Tri came on the podcast to chat about FlashAttention, the newly released FlashAttention-2, the research process at Hazy Lab, and more. This is the first episode of our "Papers Explained" series, which will cover some of the foundational research in this space. Our Discord also hosts a weekly Paper Club, which you can sign up for here.

How does FlashAttention work?

The paper is titled "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness". There are a couple of keywords to call out:

* "Memory Efficient": standard attention memory usage is quadratic with sequence length (i.e. O(N^2)). FlashAttention is sub-quadratic at O(N).
* "Exact": the opposite of "exact" in this case is "sparse", as in "sparse networks" (see our episode with Jonathan Frankle for more). This means that you're not giving up any precision.
* The "IO" in "IO-Awareness" stands for "Input/Output" and hints at a write/read related bottleneck.

Before we dive in, picture a simple GPU architecture diagram: the GPU has access to three memory stores at runtime:

* SRAM: this is on-chip memory co-located with the actual execution core. It's limited in size (~20MB on an A100 card) but extremely fast (19TB/s total bandwidth).
* HBM: this is off-chip but on-card memory, meaning it's in the GPU but not co-located with the core itself. An A100 has 40GB of HBM, but only a 1.5TB/s bandwidth.
* DRAM: this is your traditional CPU RAM. You can have TBs of this, but you can only get ~12.8GB/s bandwidth, which is way too slow.

Now that you know what HBM is, consider how the standard Attention algorithm is implemented: all three steps include a "write X to HBM" step and a "read from HBM" step. The core idea behind FlashAttention boils down to this: instead of storing each intermediate result, why don't we use kernel fusion and run every operation in a single kernel in order to avoid memory read/write overhead? (We also talked about kernel fusion in our episode with George Hotz and how PyTorch / tinygrad take different approaches here.) The result is much faster, but much harder to read. FlashAttention is a very meaningful speed improvement on traditional Attention, and it's easy to understand why it's becoming the standard for most models.

This should be enough of a primer before you dive into our episode!
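To make the primer concrete, here is a minimal NumPy sketch (ours, not code from the FlashAttention paper or repository) contrasting naive attention, which materializes the full N x N score matrix, with a blockwise pass built on the online-softmax rescaling trick Tri describes later in the episode. The shapes, block size, and variable names are illustrative assumptions.

import numpy as np

def naive_attention(Q, K, V):
    # Materializes the full (N, N) score matrix: quadratic memory in sequence length.
    S = Q @ K.T / np.sqrt(Q.shape[-1])
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

def blockwise_attention(Q, K, V, block=64):
    # Processes K/V in blocks, keeping only running statistics (m, l) and a running
    # output O per query row (the online-softmax idea FlashAttention builds on).
    # Never stores an (N, N) matrix.
    N, d = Q.shape
    O = np.zeros((N, d))
    m = np.full((N, 1), -np.inf)   # running row-wise max
    l = np.zeros((N, 1))           # running softmax denominator
    for start in range(0, N, block):
        Kb, Vb = K[start:start + block], V[start:start + block]
        S = Q @ Kb.T / np.sqrt(d)                       # (N, block) scores for this block
        m_new = np.maximum(m, S.max(axis=-1, keepdims=True))
        P = np.exp(S - m_new)                           # block-local unnormalized weights
        scale = np.exp(m - m_new)                       # rescale previously accumulated stats
        l = l * scale + P.sum(axis=-1, keepdims=True)
        O = O * scale + P @ Vb
        m = m_new
    return O / l

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((256, 32)) for _ in range(3))
assert np.allclose(naive_attention(Q, K, V), blockwise_attention(Q, K, V))

In the real kernel the blockwise loop runs out of SRAM with fused reads and writes; this NumPy version only demonstrates that the blockwise result is exact, not the speedup.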
We talked about FlashAttention-2, how Hazy Research Group works, and some of the research being done in Transformer alternatives.

Show Notes:
* FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness (arXiv)
* FlashAttention-2
* Together AI
* From Deep Learning to Long Learning
* The Hardware Lottery by Sara Hooker
* Hazy Research
* Is Attention All You Need?
* Nvidia CUTLASS 3
* SRAM scaling slows
* Transformer alternatives: S4, Hyena, Recurrent Neural Networks (RNNs)

Timestamps:
* Tri's background [00:00:00]
* FlashAttention's deep dive [00:02:18]
* How the Hazy Research group collaborates across theory, systems, and applications [00:17:21]
* Evaluating models beyond raw performance [00:25:00]
* FlashAttention-2 [00:27:00]
* CUDA and The Hardware Lottery [00:30:00]
* Researching in a fast-changing market [00:35:00]
* Promising transformer alternatives like state space models and RNNs [00:37:30]
* The spectrum of openness in AI models [00:43:00]
* Practical impact of models like LLAMA2 despite restrictions [00:47:12]
* Incentives for releasing open training datasets [00:49:43]
* Lightning Round [00:53:22]

Transcript:

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, Partner and CTO-in-Residence at Decibel Partners. Today we have no Swyx, because he's in Singapore, so it's a one-on-one discussion with Tri Dao. Welcome! [00:00:24]Tri: Hi everyone. I'm Tri Dao, excited to be here. [00:00:27]Alessio: Tri just completed his PhD at Stanford a month ago. You might not remember his name, but he's one of the main authors in the FlashAttention paper, which is one of the seminal work in the Transformers era. He's got a lot of interest from efficient transformer training and inference, long range sequence model, a lot of interesting stuff. And now you're going to be an assistant professor in CS at Princeton next year. [00:00:51]Tri: Yeah, that's right. [00:00:52]Alessio: Yeah. And in the meantime, just to get, you know, a low pressure thing, you're Chief Scientist at Together as well, which is the company behind RedPajama. [00:01:01]Tri: Yeah. So I just joined this week actually, and it's been really exciting. [00:01:04]Alessio: So what's something that is not on the internet that people should know about you? [00:01:09]Tri: Let's see. When I started college, I was going to be an economist, so I was fully on board. I was going to major in economics, but the first week I was at Stanford undergrad, I took a few math classes and I immediately decided that I was going to be a math major. And that kind of changed the course of my career. So now I'm doing math, computer science, AI research. [00:01:32]Alessio: I had a similar thing. I started with physics and then I took like a programming course and I was like, I got to do computer science. I don't want to do physics. So FlashAttention is definitely, everybody's using this. Everybody loves it. You just released FlashAttention 2 last week. [00:01:48]Tri: Yeah. Early this week on Monday. Yeah. [00:01:53]Alessio: You know, AI time. Things move fast. So maybe let's run through some of the FlashAttention highlights, some of the innovation there, and then we can dive into FlashAttention 2. So the core improvement in FlashAttention is that traditional attention is a quadratic sequence length. And to the two, FlashAttention is linear, which obviously helps with scaling some of these models. [00:02:18]
And ever since attention became popular in 2017 with the Transformer paper, lots and lots of folks have been working on this. And a lot of approaches has been focusing on approximating attention. The goal is you want to scale to longer sequences. There are tons of applications where you want to do that. But scaling to longer sequences is difficult because attention scales quadratically in sequence length on both runtime and memory, as you mentioned. So instead of trying to approximate attention, we were trying to figure out, can we do the same computation and maybe be more memory efficient? So in the end, we ended up being the memory is linear in sequence length. In terms of computation, it's still quadratic, but we managed to make it much more hardware friendly. And as a result, we do get wall clock speed up on the order of 2 to 4x, which really helps because that just means that you'll be able to train with 2 to 4x longer sequence length for the same cost without doing any approximations. As a result, lots of folks have been using this. The thing is available in a lot of libraries that do language model training or fine tuning. [00:03:32]Alessio: And the approximation thing is important because this is an exact thing versus a sparse. So maybe explain a little bit the difference there. [00:03:40]Tri: For sure. So in addition, essentially you compute pairwise similarity between every single element in a sequence against each other. So there's been other approaches where instead of doing all that pairwise computation, you only compute similarity for some pairs of elements in the sequence. So you don't do quadratic number of comparison. And this can be seen as some form of sparsity. Essentially you're ignoring some of the elements. When you write down the matrix, you essentially say, OK, I'm going to pretend there's zero. So that has some benefits in terms of runtime and memory. But the trade-off is that it tends to do worse in terms of quality because you're essentially approximating or ignoring some elements. And I personally have worked on this as well for a few years. But when we talk to practitioners who actually train models, especially at large scale, they say, tend not to use these approximate attention methods. Because it turns out, this was surprising to me at the time, was that these approximation methods, even though they perform fewer computation, they tend to not be faster in walk-on time. So this was pretty surprising because back then, I think my background was more on the theoretical side. So I was thinking of, oh, how many flops or floating point operations are you performing? And hopefully that correlates well with walk-on time. But I realized that I was missing a bunch of ideas from the system side where flops or floating point operations don't necessarily correlate with runtime. There are other factors like memory reading and writing, parallelism, and so on. So I learned a ton from just talking to systems people because they kind of figured this stuff out a while ago. So that was really eye-opening. And then we ended up focusing a lot more on memory reading and writing because that turned out to be the majority of the time when you're doing attention is reading and writing memory. [00:05:34]Alessio: Yeah, the I.O. awareness is probably one of the biggest innovations here. And the idea behind it is, like you mentioned, the FLOPS growth of the cards have been going up, but the memory bandwidth, not as much. 
So I think maybe that was one of the assumptions that the original attention paper had. So talk a bit about how that came to be as an idea. It's one of those things that like in insight, it's like, obviously, why are we like rewriting to like HBM every time, you know, and like once you change it, it's clear. But what was that discovery process? [00:06:08]Tri: Yeah, in hindsight, a lot of the ideas have already been there in the literature. And I would say is it was somehow at the intersection of both machine learning and systems. And you kind of needed ideas from both sides. So on one hand, on the system side, so lots of systems folks have known that, oh, you know, kernel fusion is great. Kernel fusion just means that instead of performing, you know, loading the same element, instead of performing an operation, write it down, load it back up and perform the second operation, you just load it once, perform two operations and then write it down again. So that saves you kind of memory read and write in the middle there. So kernel fusion has been a classic. There's been other techniques from the system side, like tiling, where you perform things in the form of computations in block, again, so that you can load it into a really fast memory. Think of it as a cache. And this is, again, classical computer science ideas, right? You want to use the cache. So the system folks have been thinking about these ideas for a long time, and they apply to attention as well. But there were certain things in attention that made it difficult to do a complete kernel fusion. One of which is there is this softmax operation in the middle, which requires you to essentially sum across the row of the attention matrix. So it makes it difficult to kind of break it, because there's this dependency. So it makes it difficult to break things into a block. So on the system side, people have been thinking about these ideas, but it's been difficult to kind of do kernel fusion for the entire operation. On the machine learning side, people have been thinking more algorithmically. They say, okay, either we can approximate attention, or there's this trick called the online softmax trick, which says that because of softmax, the way it's written mathematically, you can actually break it up into smaller pieces, do some rescaling, and still get the right answer. So this online softmax trick has been around for a while. I think there was a paper from NVIDIA folks back in 2018 about this. And then there was a paper from Google. So Marcus, Rob, and Stats wrote a paper late 2021 on using this online softmax trick to break attention up into smaller pieces. So a lot of the ideas were already there. But it turns out, you kind of need to combine ideas from both sides. So you need to understand that, hey, we want to do kernel fusion to reduce memory written writes. But we also need this online softmax trick to be able to break the softmax into smaller pieces so that a lot of the systems tricks kind of carry through. We saw that, and it was kind of a natural idea that we ended up using ideas from both sides, and it ended up working pretty well. Yeah. [00:08:57]Alessio: Are there any downsides to kernel fusion? If I think about databases and the reasons why we have atomic operations, you know, it's like, you have observability and fallback in between them. How does that work with attention? Is there anything that we lose by fusing the operations? 
[00:09:13]Tri: Yeah, I think mostly on the practical side is that you lose a little bit of flexibility in the sense that, hey, now you have, for example, faster attention, it's just a subroutine that you would call to do attention. But as a researcher, let's say you don't want that exact thing, right? You don't want just attention, let's say you want some modification to attention. You want to do, hey, I'm going to multiply the query and key, but then I'm going to do this extra thing before I carry on. So kernel fusion just means that, okay, we have a subroutine that does the entire thing. But if you want to experiment with things, you won't be able to use that fused kernel. And the answer is, can we have a compiler that then automatically does a lot of this kernel fusion? Lots of compiler folks are thinking about this, either with a new language or you can embed it in PyTorch. PyTorch folks have been working on this as well. So if you write just your code in PyTorch and they can capture the graph, can they generate code that will fuse everything together? That's still ongoing, and it works for some cases. But for attention, because of this kind of softmax rewriting stuff, it's been a little bit more difficult. So maybe in a year or two, we'll have compilers that are able to do a lot of these optimizations for you. And you don't have to, for example, spend a couple months writing CUDA to get this stuff to work. Awesome. [00:10:41]Alessio: And just to make it clear for listeners, when we say we're not writing it to memory, we are storing it, but just in a faster memory. So instead of the HBM, we're putting it in the SRAM. Yeah. [00:10:53]Tri: Yeah. [00:10:54]Alessio: Maybe explain just a little bit the difference there. [00:10:56]Tri: Yeah, for sure. This is kind of a caricature of how you think about accelerators or GPUs in particular, is that they have a large pool of memory, usually called HBM, or high bandwidth memory. So this is what you think of as GPU memory. So if you're using A100 and you list the GPU memory, it's like 40 gigs or 80 gigs. So that's the HBM. And then when you perform any operation, you need to move data from the HBM to the compute unit. So the actual hardware unit that does the computation. And next to these compute units, there are on-chip memory or SRAM, which are much, much smaller than HBM, but much faster. So the analogy there is if you're familiar with, say, CPU and RAM and so on. So you have a large pool of RAM, and then you have the CPU performing the computation. But next to the CPU, you have L1 cache and L2 cache, which are much smaller than DRAM, but much faster. So you can think of SRAM as the small, fast cache that stays close to the compute unit. Physically, it's closer. There is some kind of asymmetry here. So HBM is much larger, and SRAM is much smaller, but much faster. One way of thinking about it is, how can we design algorithms that take advantage of this asymmetric memory hierarchy? And of course, lots of folks have been thinking about this. These ideas are pretty old. I think back in the 1980s, the primary concerns were sorting. How can we sort numbers as efficiently as possible? And the motivating example was banks were trying to sort their transactions, and that needs to happen overnight so that the next day they can be ready. And so the same idea applies, which is that they have slow memory, which was hard disk, and they have fast memory, which was DRAM. And people had to design sorting algorithms that take advantage of this asymmetry. 
And it turns out, these same ideas can apply today, which is different kinds of memory. [00:13:00]Alessio: In your paper, you have the pyramid of memory. Just to give people an idea, when he says smaller, it's like HBM is like 40 gig, and then SRAM is like 20 megabytes. So it's not a little smaller, it's much smaller. But the throughput on card is like 1.5 terabytes a second for HBM and like 19 terabytes a second for SRAM, which is a lot larger. How do you think that evolves? So TSMC said they hit the scaling limits for SRAM, they just cannot grow that much more. HBM keeps growing, HBM3 is going to be 2x faster than HBM2, I think the latest NVIDIA thing has HBM3. How do you think about the future of FlashAttention? Do you think HBM is going to get fast enough when maybe it's not as useful to use the SRAM? [00:13:49]Tri: That's right. I think it comes down to physics. When you design hardware, literally SRAM stays very close to compute units. And so you don't have that much area to essentially put the transistors. And you can't shrink these things too much. So just physics, in terms of area, you don't have that much area for the SRAM. HBM is off-chip, so there is some kind of bus that essentially transfers data from HBM to the compute unit. So you have more area to essentially put these memory units. And so yeah, I think in the future SRAM probably won't get that much larger, because you don't have that much area. HBM will get larger and faster. And so I think it becomes more important to design algorithms that take advantage of this memory asymmetry. It's the same thing in CPU, where the cache is really small, the DRAM is growing larger and larger. DRAM could get to, I don't know, two terabytes, six terabytes, or something, whereas the cache stays at, I don't know, 15 megabytes or something like that. I think maybe the algorithm design becomes more and more important. There's still ways to take advantage of this, I think. So in the future, I think flash attention right now is being used. I don't know if in the next couple of years, some new architecture will come in and whatnot, but attention seems to be still important. For the next couple of years, I still expect some of these ideas to be useful. Not necessarily the exact code that's out there, but I think these ideas have kind of stood the test of time. New ideas like IO awareness from back in the 1980s, ideas like kernel fusions, tiling. These are classical ideas that have stood the test of time. So I think in the future, these ideas will become more and more important as we scale models to be larger, as we have more kinds of devices, where performance and efficiency become much, much more important. [00:15:40]Alessio: Yeah, and we had Jonathan Frankle on the podcast, and if you go to issattentionallyouneed.com, he has an outstanding bet, and he does believe that attention will be the state of the art architecture still in a few years. Did you think flash attention would be this popular? I'm always curious on the research side, you publish a paper, and obviously you know it's great work, but sometimes it just kind of falls flat in the industry. Could you see everybody just starting to use this, or was that a surprise to you? [00:16:11]Tri: Certainly, I didn't anticipate the level of popularity. Of course, we were extremely happy to have people using this stuff and giving us feedback and so on, and help us improve things. 
I think when we were writing the paper, I remember sending an email to one of my advisors, and like, hey, I'm excited about this paper, but I think the most important thing will be the artifact, which is the code. So I knew that the code will be valuable. So we kind of focus a lot on the code and make sure that the code is usable and as fast as can be. Of course, the idea, the paper presents the ideas and explain it and have experiments that validate the idea, but I knew that the artifact or the code was also pretty important. And that turned out to be the right focus, which is, you know, we put out the paper, we release the code and continue working on the code. So it's a team effort with my co-authors as well. [00:17:07]Alessio: We mentioned Hazy Research a bunch of times on the podcast before. I would love for you to spend five minutes just talking about how does the group work? How do people get together? How do you bounce ideas off of each other? Yeah. [00:17:21]Tri: So Hazy Research is a research group at Stanford led by one of my advisors, Chris Re. I love the people there. It was one of the best experiences I had. They've made my PhD so much more enjoyable. And I think there are a couple of ways that the group has been working pretty well. So one is, I think there's a diverse pool of people who either, you know, some of them focus on algorithms and theory, some of them focus on building systems, some of them focus on applications. And as a result, there is this flow of idea. So as an example, some of us were working on like more algorithms and theory, and then we can talk to the folks building systems and say, hey, let's try it out and let's put it in the systems and see how it is. And there you will get feedback from systems folks. They will say, hey, we implemented this, or we tried this and this is where it doesn't work, something like that. And once we put it in the systems, the application folks can use the algorithm or new methods or new models. And we again get great feedback from them because the application folks, for example, some of my good friends, they focus on medical imaging or seizure detection. And that is the problem they care about. And if your method doesn't work on the task they care about, they will tell you. Whereas I think a lot of people in machine learning, they're a little bit more flexible. So they will be like, hey, it doesn't work on seizure detection. Let's try some other task, right? But having that direct feedback of like, hey, it doesn't work there, let's figure out why. I think that that feedback allows us to do better work. And I think that kind of process of exchanging ideas, validating it in a real system so that applications folks can try it out and give you feedback. That cycle has been very, very useful. And so that's one, having a diverse group of people. The other one is, and this is something I really appreciate from advice from Chris was try to understand the fundamental, right? And he's happy letting me go off and read some textbooks and playing with things because I think a lot of research ideas come from understanding the old literature and see how it fits with the new landscape. And so if you just new archive papers every day, that's great, but you also need to read textbooks. And that's one advice I got from Chris, which is understand the fundamentals. And I think that allows us to do more impactful work. [00:19:46]Alessio: How do you think about academia versus industry? 
I feel like AI / Machine Learning has been an area where up until three, four years ago, most of the cutting edge work was being done in academia. And now there's all these big industry research labs. You're obviously going to Princeton, so you're an academia believer. How should people think about where to go? Say I'm doing my master's, I have to decide between doing a PhD and going into OpenAI Anthropic. How should I decide? [00:20:15]Tri: I think they kind of play a complementary role, in my opinion. Of course, I also was considering different paths as well. So I think right now, scaling matters a lot, especially when you talk about language models and AI and so on. Scaling matters a lot. And that means that you need compute resources and you need infrastructure and you need engineers time. And so industry tends to have an advantage when it comes to scaling things. But a lot of the ideas actually came from academia. So let's take Attention, which got popular with the Transformer in 2017. Attention actually has been around for a while. So I think the first mention was in 2014, a paper from Bernadot and others and Yoshua Bengio, which is coming from academia. A lot of ideas did come from academia. And scaling things up, of course, I think OpenAI has been great at scaling things up. That was the bet that they made after, I think, GPT-2. So they saw that scaling these things up to back then was 1.5 billion parameter seemed to give you amazing capabilities. So they really committed to that. They really committed to scaling things. And that turned out to be, it's been a pretty successful bet. I think for academia, we're still trying to figure out exactly what we're doing in this shifting landscape. And so lots of folks have been focusing on, for example, evaluation. So I know the Stanford Center for Foundation Model led by Percy, they have this benchmark called HELM, which is this holistic benchmark. So trying to figure out, okay, characterizing the landscape of different kinds of models, what people should evaluate, what people should measure, and things like that. So evaluation is one role. The other one is understanding. So this has happened historically where there's been some development in the industry and academia can play a role in explaining, understanding. They have the luxury to slow down trying to understand stuff, right? So lots of paper on understanding what's really going on, probing these models, and so on. I think I'm not as familiar with the NLP literature, but my impression is there's a lot of that going on in the NLP conferences, which is understanding what these models are doing, what capabilities they have, and so on. And the third one I could see is that the academia can take more risky bets in the sense that we can work on stuff that is quite different from industry. I think industry, my impression is you have some objective. You're trying to say, hey, for this quarter, we want to scale the model in this particular way. Next quarter, we want the model to have these capabilities. You're trying to get objectives that maybe, I don't know, 70% that will work out because it's important for the company's direction. I think for academia, the way things work is you have many, many researchers or PhD students, and they're kind of pursuing independent directions. And they have a little bit more flexibility on, hey, I'm going to try out this seemingly crazy idea and see, let's say there's a 30% chance of success or something. 
And however you define success, for academia, a lot of the time, success just means like, hey, we found something interesting. That could eventually go into industry through collaboration and so on. So I do see academia and industry kind of playing complementary roles. And as for someone choosing a career, I think just more and more generally, industry would be probably better in terms of compensation, in terms of probably work-life balance. But my biased perspective is that maybe academia gives you a little bit more freedom to think and understand things. So it probably comes down to personal choice. I end up choosing to be a professor next year at Princeton. But of course, I want to maintain a relationship with industry folks. I think industry folks can provide very valuable feedback to what we're doing in academia so that we understand where the field is moving because some of the directions are very much influenced by what, for example, OpenAI or Google is doing. So we want to understand where the field is moving. What are some promising applications? And try to anticipate, okay, if the field is moving like this, these applications are going to be popular. What problems will be important in two, three years? And then we try to start thinking about those problems so that hopefully in two, three years, we have some of the answers to some of these problems in two, three years. Sometimes it works out, sometimes it doesn't. But as long as we do interesting things in academia, that's the goal. [00:25:03]Alessio: And you mentioned the eval side. So we did a Benchmarks 101 episode. And one of the things we were seeing is sometimes the benchmarks really influence the model development. Because obviously, if you don't score well on the benchmarks, you're not going to get published and you're not going to get funded. How do you think about that? How do you think that's going to change now that a lot of the applications of these models, again, is in more narrow industry use cases? Do you think the goal of the academia eval system is to be very broad and then industry can do their own evals? Or what's the relationship there? [00:25:40]Tri: Yeah, so I think evaluation is important and often a little bit underrated. So it's not as flashy as, oh, we have a new model that can do such and such. But I think evaluation, what you don't measure, you can't make progress on, essentially. So I think industry folks, of course, they have specific use cases that their models need to do well on. And that's what they care about. Not just academia, but other groups as well. People do understand what are some of the emerging use cases. So for example, now one of the most popular use cases is Chatbot. And then I think folks from Berkeley, some of them are from Berkeley, call them MLCs. They set up this kind of Chatbot arena to essentially benchmark different models. So people do understand what are some of the emerging use cases. People do contribute to evaluation and measurement. And as a whole, I think people try to contribute to the field and move the field forward, albeit that maybe slightly different directions. But we're making progress and definitely evaluation and measurement is one of the ways you make progress. So I think going forward, there's still going to be just more models, more evaluation. We'll just have better understanding of what these models are doing and what capabilities they have. 
[00:26:56]Alessio: I like that your work has been focused on not making benchmarks better, but it's like, let's just make everything faster. So it's very horizontal. So FlashAttention 2, you just released that on Monday. I read in the blog post that a lot of the work was also related to some of the NVIDIA library updates. Yeah, maybe run us through some of those changes and some of the innovations there. Yeah, for sure. [00:27:19]Tri: So FlashAttention 2 is something I've been working on for the past couple of months. So the story is the NVIDIA CUTLASS team, they released a new version of their library, which contains all these primitives to allow you to do matrix multiply or memory loading on GPU efficiently. So it's a great library and I built on that. So they released their version 3 back in January and I got really excited and I wanted to play with that library. So as an excuse, I was just like, okay, I'm going to refactor my code and use this library. So that was kind of the start of the project. By the end, I just ended up working with the code a whole lot more and I realized that, hey, there are these inefficiencies still in Flash Attention. We could change this way or that way and make it, in the end, twice as fast. But of course, building on the library that the NVIDIA folks released. So that was kind of a really fun exercise. I was starting out, it's just an excuse for myself to play with the new library. What ended up was several months of improvement, improving Flash Attention, discovering new ideas. And in the end, we managed to make it 2x faster and now it's pretty close to probably the efficiency of things like matrix multiply, which is probably the most optimized subroutine on the planet. So we're really happy about it. The NVIDIA Cutlass team has been very supportive and hopefully in the future, we're going to collaborate more. [00:28:46]Alessio: And since it's an NVIDIA library, can you only run this on CUDA runtimes? Or could you use this and then run it on an AMD GPU? [00:28:56]Tri: Yeah, so it's an NVIDIA library. So right now, the code we release runs on NVIDIA GPUs, which is what most people are using to train models. Of course, there are emerging other hardware as well. So the AMD folks did implement a version of Flash Attention, I think last year as well, and that's also available. I think there's some implementation on CPU as well. For example, there's this library, ggml, where they implemented the same idea running on Mac and CPU. So I think that kind of broadly, the idea would apply. The current implementation ended up using NVIDIA's library or primitives, but I expect these ideas to be broadly applicable to different hardware. I think the main idea is you have asymmetry in memory hierarchy, which tends to be everywhere in a lot of accelerators. [00:29:46]Alessio: Yeah, it kind of reminds me of Sara Hooker's post, like the hardware lottery. There could be all these things that are much better, like architectures that are better, but they're not better on NVIDIA. So we're never going to know if they're actually improved. How does that play into some of the research that you all do too? [00:30:04]Tri: Yeah, so absolutely. Yeah, I think Sara Hooker, she wrote this piece on hardware lottery, and I think she captured really well of what a lot of people have been thinking about this. 
And I certainly think about hardware lottery quite a bit, given that I do some of the work that's kind of really low level at the level of, hey, we're optimizing for GPUs or NVIDIA GPUs and optimizing for attention itself. And at the same time, I also work on algorithms and methods and transformer alternatives. And we do see this effect in play, not just hardware lottery, but also kind of software framework lottery. You know, attention has been popular for six years now. And so many kind of engineer hours has been spent on making it as easy and efficient as possible to run transformer, right? And there's libraries to do all kinds of tensor parallel, pipeline parallel, if you use transformer. Let's say someone else developed alternatives, or let's just take recurrent neural nets, like LSTM, GRU. If we want to do that and run that efficiently on current hardware with current software framework, that's quite a bit harder. So in some sense, there is this feedback loop where somehow the model architectures that take advantage of hardware become popular. And the hardware will also kind of evolve to optimize a little bit for that kind of architecture and software framework will also evolve to optimize for that particular architecture. Right now, transformer is the dominant architecture. So yeah, I'm not sure if there is a good way out of this. Of course, there's a lot of development. Things like, I think compilers will play a role because compilers allow you to maybe still be much more efficient across different kinds of hardware because essentially you write the same code and compiler will be able to make it run efficiently different kinds of hardware. So for example, there's this language Mojo, they're compiler experts, right? And their bet is AI models will be running on different kinds of devices. So let's make sure that we have really good compilers with a good language that then the compiler can do a good job optimizing for all kinds of devices. So that's maybe one way that you can get out of this cycle. But yeah, I'm not sure of a good way. In my own research, I have to think about both the algorithm new model and how it maps to hardware. So there are crazy ideas that seem really good, but will be really, really difficult to run efficiently. And so as a result, for example, we can't really scale some of the architectures up simply because they're not hardware friendly. I have to think about both sides when I'm working on new models. [00:32:50]Alessio: Yeah. Have you spent any time looking at some of the new kind of like AI chips companies, so to speak, like the Cerebras of the world? Like one of their innovations is co-locating everything on the chip. So you remove some of this memory bandwidth issue. How do you think about that? [00:33:07]Tri: Yeah, I think that's an interesting bet. I think Tesla also has this Dojo supercomputer where they try to have essentially as fast on-chip memory as possible and removing some of these data transfer back and forth. I think that's a promising direction. The issues I could see, you know, I'm definitely not a hardware expert. One issue is the on-chip memory tends to be really expensive to manufacture, much more expensive per gigabyte compared to off-chip memory. So I talked to, you know, some of my friends at Cerebros and, you know, they have their own stack and compiler and so on, and they can make it work. The other kind of obstacle is, again, with compiler and software framework and so on. 
For example, if you can run PyTorch on this stuff, lots of people will be using it. But supporting all the operations in PyTorch will take a long time to implement. Of course, people are working on this. So I think, yeah, we kind of need these different bets on the hardware side as well. Hardware has, my understanding is, has a kind of a longer time scale. So you need to design hardware, you need to manufacture it, you know, maybe on the order of three to five years or something like that. So people are taking different bets, but the AI landscape is changing so fast that it's hard to predict, okay, what kind of models will be dominant in, let's say, three or five years. Or thinking back five years ago, would we have known that Transformer would have been the dominant architecture? Maybe, maybe not, right? And so different people will make different bets on the hardware side. [00:34:39]Alessio: Does the pace of the industry and the research also influence the PhD research itself? For example, in your case, you're working on improving attention. It probably took you quite a while to write the paper and everything, but in the meantime, you could have had a new model architecture come out and then it's like nobody cares about attention anymore. How do people balance that? [00:35:02]Tri: Yeah, so I think it's tough. It's definitely tough for PhD students, for researchers. Given that the field is moving really, really fast, I think it comes down to understanding fundamental. Because that's essentially, for example, what the PhD allows you to do. It's been a couple of years understanding the fundamentals. So for example, when I started my PhD, I was working on understanding matrix vector multiply, which has been a concept that's been around for hundreds of years. We were trying to characterize what kind of matrices would have theoretically fast multiplication algorithm. That seems to have nothing to do with AI or anything. But I think that was a time when I developed mathematical maturity and research taste and research skill. The research topic at that point didn't have to be super trendy or anything, as long as I'm developing skills as a researcher, I'm making progress. And eventually, I've gotten quite a bit better in terms of research skills. And that allows, for example, PhD students later in their career to quickly develop solutions to whatever problems they're facing. So I think that's just the natural arc of how you're being trained as a researcher. For a lot of PhD students, I think given the pace is so fast, maybe it's harder to justify spending a lot of time on the fundamental. And it's tough. What is this kind of explore, exploit kind of dilemma? And I don't think there's a universal answer. So I personally spend some time doing this kind of exploration, reading random textbooks or lecture notes. And I spend some time keeping up with the latest architecture or methods and so on. I don't know if there's a right balance. It varies from person to person. But if you only spend 100% on one, either you only do exploration or only do exploitation, I think it probably won't work in the long term. It's probably going to have to be a mix and you have to just experiment and kind of be introspective and say, hey, I tried this kind of mixture of, I don't know, one exploration paper and one exploitation paper. How did that work out for me? Should I, you know, having conversation with, for example, my advisor about like, hey, did that work out? You know, should I shift? 
I focus more on one or the other. I think quickly adjusting and focusing on the process, I think that's probably the right way. I don't have like a specific recommendation that, hey, you focus, I don't know, 60% on lecture notes and 40% on arXiv papers or anything like that. [00:37:35]Alessio: Let's talk about some Transformer alternatives. You know, say Jonathan Frankle loses his bet and Transformer is not the state of the art architecture. What are some of the candidates to take over? [00:37:49]Tri: Yeah, so this bet is quite fun. So my understanding is this bet between Jonathan Frankle and Sasha Rush, right? I've talked to Sasha a bunch and I think he recently gave an excellent tutorial on Transformer alternatives as well. So I would recommend that. So just to quickly recap, I think there's been quite a bit of development more recently about Transformer alternatives. So architectures that are not Transformer, right? And the question is, can they do well on, for example, language modeling, which is kind of the application that a lot of people care about these days. So there are methods based on state space models that came out in 2021 from Albert Gu, Karan Goel, and Chris Re that presumably could do much better in terms of capturing long range information while not scaling quadratically. They scale sub-quadratically in terms of sequence length. So potentially you could have a much more efficient architecture when sequence length gets really long. The other ones have been focusing more on recurrent neural nets, which is, again, an old idea, but adapting to the new landscape. So things like RWKV, I've also personally worked in this space as well. So there's been some promising results. So there's been some results here and there that show that, hey, these alternatives, either RNN or state space methods, can match the performance of Transformer on language modeling. So that's really exciting. And we're starting to understand on the academic research side, we want to understand, do we really need attention? I think that's a valuable kind of intellectual thing to understand. And maybe we do, maybe we don't. If we want to know, we need to spend serious effort on trying the alternatives. And there's been folks pushing on this direction. I think RWKV has scaled up to, they have a model at 14 billion that seems pretty competitive with Transformer. So that's really exciting. That's kind of an intellectual thing. We want to figure out if attention is necessary. So that's one motivation. The other motivation is Transformer alternatives could have an advantage in practice in some of the use cases. So one use case is really long sequences. The other is really high throughput of generation. So for really long sequences, when you train with Transformer, with flash attention and so on, the computation is still quadratic in the sequence length. So if your sequence length is on the order of, I don't know, 16K, 32K, 100K or something, which some of these models have sequence length 100K, then you do get significantly slower in terms of training, also in terms of inference. So maybe these alternative architectures could scale better in terms of sequence length. I haven't seen actual validation on this. Let's say an RNN model released with context length, I don't know, 100K or something. I haven't really seen that. But the hope could be that as we scale to long sequences, these alternative architectures could be more well-suited.
Not just text, but things like high resolution images, audio, video, and so on, which are emerging applications. So that's one, long sequences. Number two is high throughput generation, where I can imagine scenarios where the application isn't like an interactive chatbot, but let's say a company wants to batch as many requests as possible on their server, or they're doing offline processing, they're generating stuff based on their internal documents, that you need to process in batch. And the issue with Transformer is that during generation, it essentially needs to keep around all the previous history. It's called the KV cache. And that could take a significant amount of memory, so you can't really batch too much because you run out of memory. I am personally bullish on RNNs. I think RNNs, they essentially summarize the past into a state vector that has fixed size, so the size doesn't grow with the history. So that means that you don't need as much memory to keep around all the previous tokens. And as a result, I think you can scale to much higher batch sizes. And as a result, you can make much more efficient use of the GPUs or the accelerator, and you could have much higher generation throughput. Now, this, I don't think, has been validated at scale. So as a researcher, I'm bullish on this stuff because I think in the next couple of years, these are use cases where these alternatives could have an advantage. We'll just kind of have to wait and see to see if these things will happen. I am personally bullish on this stuff. At the same time, I also spend a bunch of time making attention as fast as possible. So maybe hedging and playing both sides. Ultimately, we want to understand, as researchers, we want to understand what works, why do the models have these capabilities? And one way is, let's push attention to be as efficient as possible. On the other hand, let's push other alternatives to be as efficient at scale, as big as possible, and so that we can kind of compare them and understand. Yeah, awesome. [00:43:01]Alessio: And I think as long as all of this work happens in the open, it's a net positive for everybody to explore all the paths. Yeah, let's talk about open-source AI. Obviously, Together, when Red Pajama came out, which was an open clone of the LLAMA1 pre-training dataset, it was a big thing in the industry. LLAMA2 came out on Tuesday, I forget. And this week, there's been a lot of things going on, which they call open-source, but it's not really open-source. Actually, we wrote a post about it that was on the front page of Hacker News before this podcast, so I was frantically responding. How do you think about what open-source AI really is? In my mind, in open-source software, we have different levels of open. So there's free software, that's like the GPL license. There's open-source, which is Apache, MIT. And then there's kind of restricted open-source, which is the SSPL and some of these other licenses. In AI, you have the open models. So Red Pajama is an open model because you have the pre-training dataset, you have the training runs and everything. And then there's obviously randomness that doesn't make it one-to-one if you retrain it. Then you have the open-weights model that's kind of like StableLM, where the weights are open, but the dataset is not open. And then you have LLAMA2, where the dataset is not open and the weights are restricted. It's kind of like not really open-source, but open enough.
I think it's net positive because it's like $3 million of flops donated to the public. [00:44:32]Tri: How do you think about that? [00:44:34]Alessio: And also, as you work at Together, what is your philosophy with open-source AI? Right, right. [00:44:40]Tri: Yeah, I think that's a great question. And I think about it on maybe more practical terms. So of course, Meta has done an amazing job training LLAMA1, LLAMA2. And for LLAMA2, they make it much less restrictive compared to LLAMA1. Now you can use it for businesses, unless you have more than 700 million monthly active users or something like that. I think just this change will have a very significant impact in the kind of landscape of open-source AI, where now lots of businesses, lots of companies will be using, I expect will be using things like LLAMA2. They will fine-tune on their own dataset. They will be serving variants or derivatives of LLAMA2. Whereas before, with LLAMA1, it was also a really good model, but businesses, companies weren't allowed to do that. So I think on a more practical term, it's kind of shifting the balance between closed-source models like OpenAI and Anthropic and Google, where you're making API calls, right? And maybe you don't understand as much of what the model is doing, how the model is changing, and so on. Versus now, we have a model with open weights that is pretty competitive from what I've seen in terms of benchmarks, pretty competitive with GPT 3.5, right? And if you fine-tune it on your own data, maybe it's more well-suited for your own data. And I do see that's going to shift the balance of it. More and more folks are going to be using, let's say, derivatives of LLAMA2. More and more folks are going to fine-tune and serve their own model instead of calling an API. So that shifting of balance is important because in one way, we don't want just a concentration of decision-making power in the hands of a few companies. So I think that's a really positive development from Meta. Of course, training the model takes a couple of millions of dollars, but engineers have, I'm sure, spent tons of time trying many, many different things. So the actual cost is probably way more than that. And they make the weights available, and probably a lot of companies are going to be using this. So I think that's a really positive development. And we've also seen amazing progress on the open source community where they would take these models and they either fine-tune on different kinds of data sets or even make changes to the model. So as an example, I think for LLAMA1, the context length was limited to 2K. Like a bunch of folks figured out some really simple methods to scale up to like 8K. [00:47:12]Alessio: Like the RoPE. [00:47:13]Tri: Yes. I think the open source community is very creative, right? And lots of people. LLAMA2 will, again, kind of accelerate this where more people will try it out. More people will make tweaks to it and make a contribution and then so on. So overall, I think I see that as still a very positive development for the field. And there's been lots of libraries that will allow you to host or fine-tune these models, like even with quantization and so on. Just a couple of hours after LLAMA2 was released, tons of companies were announcing that, hey, it's on our API or hosting and so on, and Together did the same. So it's a very fast-paced development, and just kind of a model with available weights that businesses are allowed to use, I think that alone is already a very positive development.
At the same time, yeah, we can do much better in terms of releasing data sets. Data sets tend to be... Somehow people are not incentivized to release data sets. So philosophically, yeah, you want to be as open as possible. But on a practical term, I think it's a little bit harder for companies to release data sets. Legal issues. The data sets released tend to be not as eye-catchy as the model release. So maybe people are less incentivized to do that. We've seen quite a few companies releasing data sets. Together released the Red Pajama data set. I think Cerebras then worked on that and deduplicated and cleaned it up and released SlimPajama and so on. So we're also seeing positive development on that front, kind of on the pre-training data set. So I do expect that to continue. And then on the fine-tuning data set or instruction tuning data set, I think we now have quite a few open data sets on instruction tuning and fine-tuning. But these companies do pay for human labelers to annotate these instruction tuning data sets. And that is expensive. And maybe they will see that as their competitive advantage. And so it's harder to incentivize these companies to release these data sets. So I think on a practical term, we're still going to make a lot of progress on open source AI, on both the model development, on both model hosting, on pre-training data set and fine-tuning data set. Right now, maybe we don't have the perfect open source model where all the data sets are available. Maybe we don't have such a thing yet, but we've seen very fast development on the open source side. I think just maybe this time last year, there weren't as many models that are competitive with, let's say, ChatGPT. [00:49:43]Alessio: Yeah, I think the open data sets have so much more impact than open models. If you think about Eleuther and the work that they've done, GPT-J was great, and the Pythia models are great, but the Pile and the Stack, everybody uses them. So hopefully we get more people to contribute time to work on data sets instead of doing the 100th open model that performs worse than all the other ones, but they want to say they released the model. [00:50:14]Tri: Yeah, maybe the question is, how do we figure out an incentive structure so that companies are willing to release open data sets? And for example, it could be like, I think some of the organizations are now doing this where they are asking volunteers to annotate and so on. And maybe the Wikipedia model of data sets, especially for instruction tuning, could be interesting where people actually volunteer their time and instead of editing Wikipedia, add annotation. And somehow they acknowledge and feel incentivized to do so. Hopefully we get to that kind of level of, in terms of data, it would be kind of like Wikipedia. And in terms of model development, it's kind of like Linux where people are contributing patches and improving the model in some way. I don't know exactly how that's going to happen, but based on history, I think there is a way to get there. [00:51:05]Alessio: Yeah, I think the Dolly-15K data set is a good example of a company saying, let's do this smaller thing, just make sure we make it open. We had Mike Conover from Databricks on the podcast, and he was like, people just bought into it and leadership was bought into it. You have companies out there with 200,000, 300,000 employees. It's like, just put some of them to label some data. It's going to be helpful. So I'm curious to see how that evolves. What made you decide to join Together?
[00:51:35]Tri: For Together, the focus has been a lot on open source models. And I think that aligns quite well with what I care about, of course. I also know a bunch of people there that I know and trust, and I'm excited to work with them. Philosophically, the way they've been really open with data set and model releases, I like that a lot. Personally, for the stuff, for example, the research that I've developed, like we also try to make code available, free to use and modify and so on, contributing to the community. That has given us really valuable feedback from the community and improved our work. So philosophically, I like the way Together has been focusing on open source models. And the nice thing is we're also going to be at the forefront of research, and the kind of research areas that I'm really excited about, things like efficient training and inference, align quite well with what the company is doing. We'll try our best to make things open and available to everyone. Yeah, but it's going to be fun being at the company, leading a team, doing research on the topic that I really care about, and hopefully we'll make things open to benefit the community. [00:52:45]Alessio: Awesome. Let's jump into the lightning round. Usually, I have two questions. So one is on acceleration, one on exploration, and then a takeaway. So the first one is, what's something that already happened in AI machine learning that you thought would take much longer than it has? [00:53:01]Tri: I think understanding jokes. I didn't expect that to happen, but it turns out scaling models up and training on lots of data, the model can now understand jokes. Maybe it's a small thing, but that was amazing to me. [00:53:16]Alessio: What about the exploration side? What are some of the most interesting unsolved questions in the space? [00:53:22]Tri: I would say reasoning in the broad term. We don't really know how these models do it. Essentially, they do something that looks like reasoning. We don't know how they're doing it. We have some ideas. And in the future, I think we will need to design architectures that explicitly have some kind of reasoning module in them if we want to have much more capable models. [00:53:43]Alessio: What's one message you want everyone to remember today? [00:53:47]Tri: I would say try to understand both the algorithms and the systems that these algorithms run on. I think the intersection of machine learning and systems has been really exciting, and there's been a lot of amazing results at this intersection. And then when you scale models to large scale, both the machine learning side and the system side really matter. [00:54:06]Alessio: Awesome. Well, thank you so much for coming on, Tri. [00:54:09]Tri: This was great. Yeah, this has been really fun. [00:54:11] Get full access to Latent Space at www.latent.space/subscribe

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

OpenAI just rollicked the AI world yet again yesterday — while releasing the long awaited ChatGPT API, they also priced it at $2 per million tokens generated, which is 90% cheaper than the text-davinci-003 pricing of the “GPT3.5” family. Their blogpost on how they did it is vague: Through a series of system-wide optimizations, we've achieved 90% cost reduction for ChatGPT since December; we're now passing through those savings to API users.We were fortunate enough to record Episode 2 of our podcast with someone who routinely creates 90%+ improvements for their customers, and in fact have started productizing their own infra skills with Codeium, the rapidly growing free-forever Copilot alternative (see What Building “Copilot for X” Really Takes). Varun Mohan is CEO of Exafunction/Codeium, and he indulged us in diving deep into AI infrastructure, compute-optimal training vs inference tradeoffs, and why he loves suffering.Recorded in-person at the beautiful StudioPod studios in San Francisco.Full transcript is below the fold. Timestamps* 00:00: Intro to Varun and Exafunction* 03:06: GPU Efficiency, Model Flop Utilization, Dynamic Multiplexing* 05:30: Should companies own their ML infrastructure?* 07:00: The two kinds of LLM Applications* 08:30: Codeium* 14:50: “Our growth is 4-5% day over day”* 16:30: Latency, Quality, and Correctability* 20:30: Acceleration mode vs Exploration mode* 22:00: Copilot for X - Harvey AI's deal with Allen & Overy* 25:00: Scaling Laws (Chinchilla)* 28:45: “The compute-optimal model might not be easy to serve”* 30:00: Smaller models* 32:30: Deepmind Retro can retrieve external infromation* 34:30: Implications for embedding databases* 37:10: LLMOps - Eval, Data Cleaning* 39:45: Testing/User feedback* 41:00: “Users Is All You Need”* 42:45: General Intelligence + Domain Specific Dataset* 43:15: The God Nvidia computer* 46:00: Lightning roundShow notes* Varun Mohan Linkedin* Exafunction* Blogpost: Are GPUs Worth it for ML* Codeium* Copilot statistics* Eleuther's The Pile and The Stack* What Building “Copilot for X” Really Takes* Copilot for X* Harvey, Copilot for Law - deal with Allen & Overy* Scaling Laws* Training Compute-Optimal Large Language Models - arXiv (Chinchilla paper)* chinchilla's wild implications (LessWrong)* UL2 20B: An Open Source Unified Language Learner (20B)* Paper - Deepmind Retro* “Does it make your beer taste better”* HumanEval benchmark/dataset* Reverse Engineering Copilot internals* Quora Poe* Prasanna Sankar notes on FLOPs and Bandwidth* NVIDIA H100 specs - 3TB/s GPU memory, 900GB/s NVLink Interconnect* Optimizer state is 14x size of model - 175B params => 2.5TB to store state → needs at least 30 H100 machines with 80GB each* Connor Leahy on The Gradient PodcastLightning Rounds* Favorite AI Product: Midjourney* Favorite AI Community: Eleuther and GPT-J* One year prediction: Better models, more creative usecases* Request for Startup: Superathlete Fitness Assistant* Takeaway: Continue to tinker!Transcript[00:00:00] Alessio Fanelli: Hey everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO in residence at Decibel Partners. I'm joined by my cohost, swyx, writer, editor of L Space Diaries.[00:00:20] swyx: Hey, and today we have Varun Mohan from Codeium / Exafunction on. I should introduce you a little bit because I like to get the LinkedIn background out of the way.[00:00:30] So you did CS at MIT and then you spent a few years at Nuro where you were ultimately tech lead manager for autonomy. And that's an interesting dive. 
Self-driving cars in AI and then you went straight into Exafunction with a few of your coworkers and that's where I met some of them and started knowing about Exafunction.[00:00:51] And then from out of nowhere you cloned GitHub Copilot. That's a lot of progress in a very short amount of time. So anyway, welcome .[00:00:59] Varun Mohan: That's high praise.[00:01:00] swyx: What's one thing about you that doesn't appear on LinkedIn that is a big part of what people should know?[00:01:05] Varun Mohan: I actually really like endurance sports actually.[00:01:09] Like I, I've done multiple triathlons. I've actually biked from San Francisco to LA. I like things that are like suffering. I like to suffer while I, while I do sports. Yeah.[00:01:19] swyx: Do you think a lot about like code and tech while you're doing those endurance sports or are you just,[00:01:24] Varun Mohan: your mind is just focused?[00:01:26] I think it's maybe a little bit of both. One of the nice things about, I guess, endurance athletics, It's one of the few things you can do where you're not thinking about, you can't really think about much beyond suffering. Like you're climbing up a hill on a bike and you see like, uh, you see how many more feet you need to climb, and at that point you're just struggling.[00:01:45] That's your only job. Mm-hmm. . Yeah. The only thing you can think of is, uh, pedaling one more pedal. So it's actually like a nice, a nice way to not think about work. Yeah,[00:01:53] Alessio Fanelli: yeah, yeah. Maybe for the audience, you wanna tell a bit about exa function, how that came to be and how coding came out[00:01:59] Varun Mohan: of that. So a little bit about exo function.[00:02:02] Before working at exa function, I worked at Neuro as Sean was just saying, and at neuro, I sort of managed large scale offline deep learning infrastructure. Realized that deep learning infrastructure is really hard to build and really hard to maintain for even the most sophisticated companies, and started exa function to basically solve that gap, to make it so that it was much easier for companies.[00:02:24] To serve deep learning workloads at scale. One of the key issues that we noticed is GPUs are extremely hard to manage fundamentally because they work differently than CPUs. And once a company has heterogeneous hardware requirements, it's hard to make sure that you get the most outta the hardware. It's hard to make sure you can get, get great GPU utilization and exa function was specifically built to make it so that you could get the most outta the hardware.[00:02:50] Make sure. Your GP was effectively virtualized and decoupled from your workload to make it so that you could be confident that you were running at whatever scale you wanted without burning the bank.[00:03:00] swyx: Yeah. You gave me this metric about inefficiency,[00:03:03] Varun Mohan: right? Oh, okay. Like flop efficiency. Yeah. Yeah. So basically, I think it comes down to, for most people, one of the things about CPUs that's really nice is with containers, right?[00:03:13] You can end up having a single. You can place many containers on them and all the containers will slowly start eating the compute. It's not really the same with GPUs. Like let's say you have a single. For the most part, only have one container using that gpu. And because of that, people heavily underestimate what a single container can sort of do.[00:03:33] And the GPU is left like heavily idle. 
And I guess the common term now with a lot of LLM workloads is like the flop efficiency of these workloads. MFU, yeah. Yeah. Model flop utilization. The model flop utilization, which is basically like what fraction of the flops or compute on the hardware is actually getting used.[00:03:49] And sort of what we did at Exafunction, not only did we make it so that the model was always running, we also built compiler technology to make it so that the model was also running more efficiently. And some of these things are with tricks like operator fusion, like basically you could imagine fusing two operations together such that the time it takes to compute[00:04:07] the fused operation is lower than the time it takes for each individual operation. Oh my God. Yeah.[00:04:13] Alessio Fanelli: Yeah. And you have this technique called dynamic multiplexing, which is basically, instead of having a one-to-one relationship, you have one GPU for multiple clients. And I saw one of your customers, they went from three clients to just one single GPU and cut the cost by 97%.[00:04:29] What were some of those learnings, seeing hardware usage and efficiencies and how that then played into what, what[00:04:34] Varun Mohan: you're building? Yeah, I think it basically showed that there was probably a gap with even very sophisticated teams. Making good use of the hardware is just not an easy problem. I think that was the main thing. It's not that these teams were like not good at what they were doing, it's just that they were trying to solve a completely separate problem.[00:04:50] They had a model that was trained in-house and their goal was to just run it, and that should be an easy thing to do, but surprisingly still, it's not that easy. And that problem compounds in complexity with the fact that there are more accelerators now in the cloud. There's like TPUs, Inferentia, and there's a lot of decisions, uh, that users need to make even in terms of GPU types.[00:05:10] And I guess sort of what we had was we had internal expertise on what the right way to run the workload was, and we were basically able to build infrastructure and make it so that companies could do that without thinking. So most[00:05:21] Alessio Fanelli: teams are underutilizing their hardware. How should they think about what to own?[00:05:26] You know, like should they own the inference architecture? Like should they use XLA to get it to production? How do you think[00:05:32] Varun Mohan: about it? So I think one thing that has proven to be true over the last year and a half is companies, for the most part, should not be trying to figure out what the optimal ML architecture is or training architecture is.[00:05:45] Especially with a lot of these large language models. We have generic models and transformer architecture that are solving a lot of distinct problems. I'll caveat that with most companies. Some of our customers, which are autonomous vehicle companies, have extremely strict requirements like they need to be able to run a model at very low latency, extremely high precision recall.[00:06:05] You know, GPT-3 is great, but the precision recall, you wouldn't trust someone's life with that, right? So because of that, they need to innovate new kinds of model architectures. For a vast majority of enterprises, they should probably be using something off the shelf, fine tuning BERT models.
If it's vision, they should be fine tuning ResNet or using something like CLIP. Like the less work they can do, the better.[00:06:25] And I guess that was a key turning point for us, which is like we started to build more and more infrastructure for the architectures that were the most popular, and the most popular architecture was the transformer architecture. We had a lot of LLM companies explicitly reach out to us and ask us, wow, our GPT-3 bill is high.[00:06:44] Is there a way to serve GPT-3 or some open source model much more cheaply? And that's sort of what we viewed as why we were maybe prepared for when we internally needed to deploy transformer models ourselves.[00:06:58] Alessio Fanelli: And so the next step was, Hey, we have this amazing infrastructure. We can build kind of consumer facing products, so to speak, with much better unit economics, much better performance.[00:07:08] And that's how Codeium kind[00:07:10] Varun Mohan: of came to be. Yeah. I think maybe the, the play is not maybe for us to be just, we make a lot of consumer products. We want to make products with like clear ROI in the long term in the enterprise. Like we view code as maybe one of those things. Uh, and maybe we can, we can talk about code maybe after this.[00:07:27] We view products like co-pilot as being extremely valuable and something that is generating a lot of value to professionals. We saw that there was a gap there where a lot of people probably weren't developing high intensive LLM applications because of cost, because of the inability to train models the way they want to.[00:07:44] And we thought we could do that with our own infrastructure really quickly.[00:07:48] swyx: I wanna highlight when you say high intensive, you mean basically generate models every key, uh, generate inferences on every keystroke? That's[00:07:55] Varun Mohan: right. Yeah. So I would say like, there's probably two kinds of LLM applications here.[00:07:59] There's an LLM application where, you know, it rips through a bunch of data and maybe you wait a couple minutes and then you see something, and then there's an application where the quality is not exactly what you want, but it's able to generate at low enough latency. It's still providing a ton of value.[00:08:16] And I will say there's like a gap there where the number of products that have hit that co-pilot spot is actually not that high. Mm. A lot of them are, are kind of like, wait and, you know, just generate a lot of stuff and see what happens, because one is clearly more compute intensive than the other basically.[00:08:31] swyx: Well, Codeium, uh, I don't know if we told the whole story yet, you were going to[00:08:35] Varun Mohan: dive into it. Yeah, so I guess, I guess the story was I guess four or five months ago we sort of decided internally as a team we were like very early adopters of co-pilot. I'm not gonna sit here and say co-pilot, it's not a great tool.[00:08:45] We love co-pilot. It's like a fantastic tool. We all got on the beta. The moment it came out we're like a fairly small team, but we, like we all got in, we were showing each other completions. We end up writing like a lot of CUDA and C++ inside the company. And I think there was probably a thought process within us that was like, Hey, the code we write is like very high IQ.[00:09:04] You know? So like there's no way it can help. And one of the things in C++ that's like the most annoying is writing templates. Writing template programming is maybe one of those things.
No one, maybe there's like some people in the C++ standards community that can do it without looking at the, looking at anything online.[00:09:19] But we struggle. We struggle writing variadic templates, and Copilot just like ripped through. Like we had a 500 line file and it was just like writing templates like, and we didn't really even test it while we were running it. We then just compiled it and it just, We're like, wow. Like this is actually something that's not just like it's completing for loops, it's completing code for us.[00:09:38] That is like hard in our brains to reach, but fundamentally and logically is not that complicated. The only reason why it's complicated is there's just a lot of rules, right. And from then we were just like, wow, this is, that was maybe the first LLM application for us internally, because we're not like marketers that would use, uh, Jasper, where we were like, wow, this is like extremely valuable.[00:09:58] This is not a toy anymore. So we wanted to take our technology to build maybe apps where these apps were not gonna be toys, right? They were not gonna be like a demo where you post it on Twitter and then you know there's hype and then maybe like a month later, no one's using.[00:10:11] swyx: There's a report this morning, um, from co-pilot where they, they were estimating the amount of code generated by co-pilot that is then left in code repos and checked in, and it's something like 60 to 70%[00:10:24] Varun Mohan: That's, that's nuts, but I totally believe it given, given the stats we have too. There's this flip in your head once you start using products like this, where in the beginning there's like, there's like skepticism, like how, how valuable can it be? And suddenly now like user behavior fundamentally changes so that now when I need to write a function, I'm like documenting my code more because I think it's prompting the model better, right?[00:10:43] So there's like this crazy thing where it's a self-fulfilling prophecy where when you get more value from it, more of your code is generated. From co-pilot[00:10:50] swyx: just to walk through the creation process, I actually assumed that you would have grabbed your data from the Pile, which is the Eleuther AI, uh, open source, uh, code dataset.[00:11:00] But apparently you scraped your own[00:11:01] Varun Mohan: stuff. Yeah. We ended up basically using a lot of open, I guess, permissively licensed code, uh, in the public internet, mainly because I think also the Pile is, is fairly a small subset. Uh, I think maybe after we started, that also came to be, but for us, we had a model for ourselves even before that, uh, was the point.[00:11:21] Ah, okay. So the timing was just a little bit off. Yeah, exactly. Exactly. But it's awesome work. It's, it seems like there's a good amount of work that's getting done decentrally. Yeah. Which is a little bit surprising to me because I'm like more bullish on everyone needs to get together in a room and make stuff happen.[00:11:35] Like we're all in person in Mountain View. But yeah, no, it's pretty impressive. Yeah. Eleuther in general, like everything they've done, I'm pretty impressed with it. Yeah, and we're[00:11:42] swyx: gonna talk about that. Cause I, I didn't know you were that involved in the community[00:11:45] Varun Mohan: that early on I wasn't involved. It was more of like a, I was watching and maybe commenting from time to time.[00:11:50] So they're a very special community for sure.
Yeah,[00:11:52] swyx: yeah, yeah. That's true. That's true. My impression is a bunch of you are geniuses. You sit down together in a room and you. , get all your data, you train your model, like everything's very smooth sailing. Um, what's wrong with that[00:12:02] Varun Mohan: image? Yeah, so probably a lot of it just in that a lot of our serving infrastructure was already in place, Uhhuh before then.[00:12:09] So like, hey, we were able to knock off one of these boxes that I think a lot of other people maybe struggle with. The open source serving offerings are just, I will say, not great in that. That they aren't customized to transformers and these kind of workloads where I have high latency and I wanna like batch requests, and I wanna batch requests while keeping latency low.[00:12:29] Mm-hmm. , right? One of the weird things about generation models is they're like auto regressive, at least for the time being. They're auto aggressive. So the latency for a generation is a function of the amount of tokens that you actually end up generating. Like that's like the math. And you could imagine while you're generating the tokens though, unless you batch a.[00:12:46] It's gonna end up being the case that you're not gonna get great flop utilization on the hardware. So there's like a bunch of trade offs here where if you end up using something completely off the shelf, like one of these serving thing, uh, serving frameworks, you're gonna end up leaving a lot of performance on the table.[00:13:00] But for us, we were already kind of prepared. To sort of do that because of our infrastructure that we had already built up. And probably the other thing to sort of note is early on we were able to leverage open source models, sort of bootstrap it internally within our company, but then to ship, we finally had some requirements like, Hey, we want this model to have fill in the middle capabilities and a bunch of other things.[00:13:20] And we were able to ship a model ourselves. So we were able to time it so that over the course of multiple months, different pieces were like working out properly for us. So it wasn't. . You know, we started out and we were just planning the launch materials. The moment we started there was like maybe some stuff that was already there, some stuff that we had already figured out how to train models at scale internally.[00:13:38] So we were able to just leverage that muscle very quickly. I think the one[00:13:41] swyx: thing that you had figured out from the beginning was that it was gonna be free forever. Yeah. Yeah, co-pilot costs $10[00:13:47] Varun Mohan: a month. Co-pilot costs $10 a month. I would argue significantly more value than $10 a month. The important thing for us though, was we are gonna continue to build more great products on top of code completion.[00:13:58] We think code completion is maybe day one of what the future looks like. And for that, clearly we can't be a product that's like we're $10 a month and we're adding more products. We want a user base that loves using us. And we'll continue to stay with us as we continue to layer on more products. And I'm sure we're gonna get more users from the other products that we have, but we needed some sort of a differentiator.[00:14:17] And along the way we realized, hey, we're pretty efficient at running these workloads. We could probably do this. Oh, so it wasn't,[00:14:23] swyx: it was a plan to be free from the start. You just[00:14:25] Varun Mohan: realized we, yeah. 
We realized we could probably, if we cut and optimized heavily, we could probably do this properly. Part of the reasoning here was we were confident we could probably build a pro tier and go to the enterprise.[00:14:35] But for now, originally when we, when we started, we weren't like, we're just gonna go and give every, all pieces of software away for free. That wasn't like sort of the goal there. And[00:14:43] swyx: since you mentioned, uh, adoption and, you know, traction and all that, uh, what can you disclose about user growth? Yeah, user adoption.[00:14:50] Varun Mohan: Yeah. So right now we have. We probably have over 10,000 users and thousands of daily actives, and people come back day over day. Our growth is like around, you know, four to 5% day over day right now. So all of our growth right now is sort of like word of mouth, and that's fundamentally because like the product is actually one of those products where.[00:15:08] Even if you use Copilot and use us, it's, it's hard to tell the difference actually. And a lot of our users have actually churned off of Copilot.[00:15:14] swyx: I switched, yeah. Yeah. To support you guys, but also also to try[00:15:17] Varun Mohan: it out. Yeah, exactly. So the, the crazy thing is it wasn't like, Hey, we're gonna figure out a marketing motion of like, going to the people that have never heard of co-pilot and we're gonna like get a bunch of users.[00:15:27] We wanted to just get users so that in our own right we're like a really great product. Uh, and sort of we've spent a lot of engineering time and obviously we co-wrote a blog post with you, Sean, on this in terms of like, there's a lot of engineering work, even beyond the latency, making sure that you can get your cost down to make a product like this actually work.[00:15:44] swyx: Yeah. That's a long tail of, of stuff that you referenced,[00:15:47] Varun Mohan: right? Yes. Yeah, exactly.[00:15:48] swyx: And you, you said something to the order of, um, and this maybe gets into co-pilot for X uh, which is something that everybody is keen about cuz they, they see the success of co-pilot. They're like, okay, well first of all, developer tools, there's more to do here.[00:16:00] And second of all, let's take the co-pilot idea and apply it to other disciplines. I don't know if you wanna Yeah.[00:16:06] Varun Mohan: There's[00:16:06] Alessio Fanelli: gonna be some key points that, that you touched on. Um, how to estimate inference at scale, you know, and the latency versus quality trade-offs. Building on first party. So this is free forever because you run your own models, right?[00:16:19] That's right. If you were building on OpenAI, you wouldn't be able to offer it for free real-time. You know, when I first used Codeium, it was literally the same speed as Copilot, maybe a little bit[00:16:29] swyx: faster. I don't know how to quantify it,[00:16:31] Varun Mohan: but we are faster. But it's one of those things that we're not gonna like market as that's the reason, because it's not in and of itself a reason for you to like, I'm just gonna be open with you.[00:16:39] It's not a reason for you to like suddenly turn off a copilot where if our answers were trash, uh, but we were faster. You know what I mean? But your focus[00:16:46] Alessio Fanelli: was there. We used the alpha, I think prem on our discord came to us and said, you guys should try this out. So it was really fast.
Even then, prompt optimization is another big thing, and model outputs and UX kind of how you bring them together.[00:17:00] Which ones of these things are maybe like the one or two that new founders should really think about first?[00:17:07] Varun Mohan: Yeah, I think, I think my feeling on this is unless you are an exception, you probably should always bootstrap on top of an existing API. Because like even if you were to, the only reason why we didn't is because we knew that this product was actually buildable.[00:17:22] Probably if we worked hard enough to train a model, we would actually be able to build a great product already. But if you're actually going out and trying to build something from scratch, unless you genuinely believe, I need to fine tune on top of, you know, terabytes of data (a terabyte is a very large amount of data), but like tens of gigabytes of data.[00:17:37] Probably go out and build on top of an API and spend most of your time to make it so that you can hit that quality latency trade off properly. And if I were to go out and think about like the three categories of like an LLM product, it's probably like latency, quality, and correctability. The reality is, you know, if I were to take a product like co-pilot or Codeium, the latency is very low.[00:17:58] The quality I think, is good enough for the task, but the correctability is, is very easy. Correctability? What, what is correctability? Correctability means, let's say the quality is not there. Like you consider the case where the answer is wrong. How easy is it for your user to actually go and leverage parts of the generation?[00:18:16] Maybe a, a concrete example. There's a lot of things people are excited about right now where I write a comment and it generates a PR for me, and that's like, that's like really awesome in theory. I think that's like a really cool thing and I'm sure at some point we will be able to get there. That will probably require an entirely new model for what it's worth that's trained on diffs and commits and all these other things that looks at like improvements and code and stuff.[00:18:37] It's probably not gonna be just trained on generic code. But the problem with those, those sort of, I would say, applications are that, let's suppose something does change many files, makes large amounts of changes. First of all, it's guaranteed not gonna be. Because even the idea of like reviewing the change takes a long time.[00:18:54] So if the quality and the correctability is just not there, let's say you had 10 file, a 10 file change and you modified like, you know, file two and four, and those two modifications were consistent, but the other eight files were not consistent. Then suddenly the correctability is like really hard.[00:19:10] It's hard to correct the output of the model. And so the user interface is 100% really important. But maybe until you get the latency down or the correctability, like correctability, like a lot better, it's probably not gonna be shippable. And I think that's what you gotta spend your time focusing on.[00:19:26] Can you deliver a product that is actually something users want to use? And I think this is why I was talking about like demos. It's like very easy to handpick something that like works, that works for a demo, exceedingly hard for something that has large scope, like a PR to work consistently.
It will take a lot of engineering effort to make it work on small enough chunks so that a user is like, wow, this is value generative to me.[00:19:49] Because eroding user trust or consumer trust is very easy. Like that is, it is is much, much, it's very easy to erode user trust versus enterprise. So just be mindful of that, and I think that's probably like the mantra that most of these companies need to operate under. Have you done any[00:20:05] Alessio Fanelli: analysis on. What the ratio between code generated and latency is.[00:20:11] So you can generate one line, but you could also generate the whole block. You can generate Yeah. A whole class and Yeah. You know, the more you generate the, the more time it takes. Like what's the sweet spot that, that you[00:20:21] Varun Mohan: found? Yeah, so I think there was a great study and, and I'm not sure if it's possible to link it, but there was a great study about co-pilot actually that came out.[00:20:28] Basically what they said was there were two ways that developers usually develop with a code assistant technology. They're either in what's called like acceleration mode or exploration mode. And exploration mode is basically you're in the case where you don't even know what the solution space for the function is.[00:20:43] and you just wanna generate a lot of code because you don't even know what that looks like. Like it might use some API that you've never heard of. And what you're actually doing at that point is like you're writing a clean comment, just wishing and praying that you know, the generation is long enough and gets you, gets you far enough, right?[00:20:57] acceleration mode is basically you are doing things where you are very confident in what you're doing and effectively. Code gives you that muscle so that you can basically stay in flow state and you're not thinking about like exactly what the APIs look like, but push comes to shove. You will figure out what the APIs look like, but actually like mentally, it takes off like a load in your head where you're like, oh wow.[00:21:18] Like I can just do this. The intent to execution is just a lot, a lot lower there. And I think effectively you want a tool that captures that a little bit. And we have heuristics in terms of captur. Whether or not you're in acceleration versus exploration mode. And a good heuristic is, let's say you're inside like a basic block of a piece of code.[00:21:37] Let's say you're inside a a block of code or an IF statement, you're probably already in acceleration mode and you would feel really bad if I started generating the ELs clause. Because what happens if that else causes really wrong? That's gonna cause like mental load for you because you are the way programmers think.[00:21:51] They only want to complete the if statement first, if that makes sense. So there are things where we are mindful of like how many lines we generate if you use the product, like multi-line generations happen and we are happy to do them, but we don't want to do them when we think it's gonna increase load on developers, if that makes sense.[00:22:07] That[00:22:07] Alessio Fanelli: makes sense. So co-pilot for x. , what are access that you think are interesting for people to build[00:22:13] Varun Mohan: in? Didn't we see some, some tweet recently about Harvey ai, uh, company that, that is trying to sell legal? It's like a legal, legal assistance. That's, that's pretty impressive, honestly. 
That's very impressive.[00:22:23] So it seems like I would really love to see what the product looks like there, because there's a lot of text there. You know, looking at Bing AI, like, I mean, it's, it's pretty cool. But it seems like groundedness is something a lot of these products struggle with, and I assume legal, if there's one thing you want them to[00:22:39] get right, it's like the groundedness. Yeah.[00:22:42] swyx: Yeah. I've made the analogy before that law and legal language is basically just another form of programming language. You have to be that precise. Yes. Definitions must be made, and you can scroll to find the definition. It's the same thing. Yes.[00:22:55] Varun Mohan: Yes. Yeah. But like, I guess there's a question of like comprehensiveness.[00:22:59] So like, let's say, let's say the only way it generates a suggestion is it provides like, you know, citations to other legal documents. You don't want it to be the case that it misses things, so you somehow need the comprehensiveness, but also at the same time, you also don't want it to make conclusions that are not from the things it cites.[00:23:15] So, I don't know, like that's, that's very impressive. It's clear that they've demonstrated some amount of value because they've been able to close a fairly sizable enterprise contract. It was like a firm with 3,500 lawyers, something nuts, honestly. Very cool. So it's clear this is gonna happen, uh, and I think people are gonna need to be clever about how they actually make it work.[00:23:34] Within the constraints of whatever workload they're operating in. Also, you, you guys[00:23:37] swyx: are so good at training stuff, why don't you, you try[00:23:39] Varun Mohan: cloning it. Yeah. So I think, I think that's, that's, uh, preview the roadmap. Yeah, yeah, yeah, yeah. No, no, no, but I'm just kidding. I think one of the things that we genuinely believe as a startup is most startups can't really even do one thing properly.[00:23:52] Mm-hmm. Focus. Yeah. Yeah. Usually doing one thing is really hard. Most companies that go public have like maybe a couple big products. They don't really have like 10, so we're under no illusions. To give the best product experience, the amount of engineering and attention to detail to build one good product is hard.[00:24:08] So it's probably gonna be a while before we even consider leaving code. Like that's gonna be a big step because the amount of learning we need to do is gonna be high. We need to get users right. We've learned so much from our users already, so, yeah, I don't think we'd go into law anytime soon.[00:24:22] swyx: 3,500 lawyers with Allen & Overy, uh, is, is apparently the, the new[00:24:27] Varun Mohan: That's actually really big.[00:24:28] Yeah. Yeah. I can congrat.[00:24:29] swyx: Yeah, it's funny cuz like, it seems like these guys are moving faster than co-pilot. You know, co-pilot just launched, just announced enterprise, uh, like co-pilot for teams or co-pilot for Enterprise. Yeah. After like two years of testing.[00:24:40] Varun Mohan: Yeah, it does seem like the co-pilot team has built a very, very good product.[00:24:44] Um, so I don't wanna like say anything, but I think it is the case that startups will be able to move faster. I feel like that is true, but hey, like GitHub has great distribution. Whatever product they do have, they will be able to sell it really well. Shall[00:24:56] swyx: we go into model numbers and infra estimates? our favorite[00:25:01] Varun Mohan: topics.[00:25:02] Nice small models.
Nice.[00:25:04] swyx: So this is, um, relevant to basically I'm researching a lot of scaling law stuff. You have a lot of thoughts. You, you host paper discussions[00:25:12] Varun Mohan: in your team. Yeah, we, we try to like read papers that we think are really interesting and relevant to us. Recently that's been, there's just a fire hose of papers.[00:25:21] You know, someone even just curating what papers we should read internally as a company. Yeah, I think, I think there's, there's so much good content[00:25:28] swyx: out there. You should, you guys should have a podcast. I mean, I told you this before. Should have a podcast. Just, just put a mic near where, where you guys are[00:25:33] Varun Mohan: talking.[00:25:34] We gotta, we gotta keep developing Codeium though, . No, but you're doing this discussion[00:25:38] swyx: anyway. You[00:25:38] Varun Mohan: might as well just, oh, put the discussion on a podcast. I feel like some of the, some of the thoughts are raw, right? Like, they're not gonna be as, as nuanced. Like we'll just say something completely stupid during our discussions.[00:25:48] I don't know, , maybe that's exciting. Maybe that's, it's kinda like a justin.tv, but for ML papers, Okay, cool. I watched that.[00:25:55] swyx: Okay, so co-pilot is 12 billion parameters. Salesforce CodeGen is up to 16. GPT-3 is 175. GPT-4 is gonna be 100 trillion billion. Yeah. So what, what we landed on with you is with, uh, with Chinchilla, is that we now have an idea of what compute optimal data scaling is.[00:26:14] Yeah. Which is about 20 times parameters. Is that intuitive to you? Like what, what did that[00:26:18] Varun Mohan: unlock? I think basically what this shows is that bigger models are like more data efficient, like given the same number of tokens, a bigger model trained on the same number of tokens is gonna learn more basically.[00:26:32] But also at the same time, the way you have to look at it is there are more flops to train a bigger model on the same number of tokens. So like let's say I had a 10 billion parameter model and I trained it on 1 million tokens, but then I had a 20 billion parameter model, at the end of it, it will be a better model.[00:26:47] It will have better perplexity numbers, which means like the probability of like a prediction is gonna be better, for like the next token is gonna be better. But at the end of it, you did burn twice the amount of compute on it. Right? So Chinchilla is an interesting observation, which says if you have a fixed compute budget, and you want the best model that came out of it, because there's like a difference here where a model that is, that is smaller, trained on the same number of tokens, uses fewer flops.[00:27:12] There's a sweet spot of like number of tokens and size of model. I will say like people probably like. Are talking about it more than they should, and, and I'll, I'll explain why, but it's a useful result, which is like, let's say I have, you know, some compute budget and I want the best model. It tells you what that, what you should generate.[00:27:31] The problem I think here is there is a real trade off of like, you do need to run this model somewhere. You need to run it on a piece of hardware. So then it comes down to how much memory does that piece of hardware have. Let's say for a fixed compute budget, you could train a 70 billion parameter model. What are you gonna put that on?[00:27:47] Yeah, maybe you could, could you put that on an 80 gig A100? It would be a stretch.
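To put rough numbers on that serving trade-off, here is a minimal back-of-the-envelope sketch in Python. It assumes the usual ~20-tokens-per-parameter Chinchilla rule of thumb, the standard ~6ND estimate for training FLOPs, and fp16 weights; the exact coefficients in the paper differ, so treat these as order-of-magnitude figures rather than anything quoted in the conversation.

```python
# Back-of-the-envelope: compute-optimal training vs. what you can actually serve.
# Assumptions (rules of thumb, not exact Chinchilla coefficients):
#   - ~20 training tokens per parameter is roughly compute-optimal
#   - training cost ~ 6 * params * tokens FLOPs
#   - fp16 weights take 2 bytes per parameter (ignores KV cache and activations)

def chinchilla_tokens(params: float) -> float:
    """Approximate compute-optimal number of training tokens."""
    return 20 * params

def training_flops(params: float, tokens: float) -> float:
    """Standard ~6ND estimate of total training FLOPs."""
    return 6 * params * tokens

def weight_memory_gb(params: float, bytes_per_param: float = 2.0) -> float:
    """Memory just to hold the weights at a given precision."""
    return params * bytes_per_param / 1e9

for params in (2.7e9, 13e9, 70e9):
    tokens = chinchilla_tokens(params)
    print(
        f"{params / 1e9:>5.1f}B params: "
        f"~{tokens / 1e9:.0f}B tokens compute-optimal, "
        f"~{training_flops(params, tokens):.1e} training FLOPs, "
        f"~{weight_memory_gb(params):.0f} GB of fp16 weights"
    )

# A 70B model works out to ~140 GB of fp16 weights, already more than a single
# 80 GB A100, which is why the compute-optimal model is not always the one
# that is easy to serve.
```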
You could do things like, you know, int8, FP8, to reduce the amount of memory that's on the box and do all these other things. But you have to think about that first, right? When you want to go out and train that model.[00:27:59] The worst case is you ended up training that mo, that model, and you cannot serve it. So actually what you end up finding is for a lot of these code completion models, they are actually what you would consider over-trained. So by that I mean like, let's look at a model like CodeGen. It's actually trained on, I believe, and, and I could be wrong by, you know, a hundred billion here or there.[00:28:18] I got some data. Oh, okay. Let's look at the 3 billion parameter model. It's a 2.7. I think it's actually a 2.7 billion parameter model. It's weird because they also trained on natural language on top of code, but it's trained on hundreds of billions of tokens. If you applied that Chinchilla optimization to it, you'd be like, wow, this is, this is a stupid use of compute.[00:28:36] Right? Because at three billion parameters, they should be going to 60 billion tokens, anything more than 60. And they're like, they should have just increased the model size. But the reality is if they had, like the compute optimal one might not be one that's easy to serve, right? It could just have more parameters. And for our case, our models that we train internally, they might not be the most compute optimal.[00:28:56] In other words, we probably could have had a better model by making it larger, but the trade off would've been latency. We know what the impact of having higher latency is, and on top of that, being able to fit properly on our hardware constraints would've also been a concern.[00:29:08] swyx: Isn't the classic stopping point when you, you see like loss kind of levels off.[00:29:12] Right now you're just letting chinchilla tell you,[00:29:16] Varun Mohan: but like you should just look at loss. The problem is the loss will like continue to go down. It'll just continue to go down like, like in a, in a way that's like not that pleasing. It's gonna take longer and longer. It's gonna be painful, but it's like one of those things where if you look at the perplexity number difference between,[00:29:31] let's say, a model that's like 70 billion versus 10 billion, it's not massive. It's not like tens of percentage points. It's like very small, right? Mm. The reality is here, like, I mean this comes down to like IQ of like these models in some sense, like small wins at the margins are massive wins in terms of IQ.[00:29:47] Like it's harder to get those and they don't look as big, but they're like massive wins in terms of reasoning. They can now do chain of thought, all these other things. Yeah, yeah, yeah.[00:29:55] swyx: It's, and, and so apparently unlocked around the[00:29:57] Varun Mohan: 20 billion. Yes. That's right. Some kind of magic. Yeah. I think that was from UL2 or maybe one of those other papers.[00:30:03] Any thoughts on why? Like is there is? I don't know. I mean, emergence of intelligence, I think. I think maybe one of the things is like we don't even know, maybe like five years from now of what we're gonna be running are transformers. But I think it's like, we don't, we don't 100% know that that's true.
I mean, there's like a lot of maybe issues with the current version of the transformers, which is like the way attention works, the attention layers work, the amount of computers quadratic in the context sense, because you're like doing like an n squared operation on the attention blocks basically.[00:30:30] And obviously, you know, one of the things that everyone wants right now is infinite context. They wanna shove as much prop as possible in here. And the current version of what a transformer looks like is maybe not ideal. You might just end up burning a lot of flops on this when there are probably more efficient ways of doing it.[00:30:45] So I'm, I'm sure in the future there's gonna be tweaks to this. Yeah. Uh, but it is interesting that we found out interesting things of like, hey, bigger is pretty much always better. There are probably ways of making smaller models significantly better through better data. That is like definitely true. Um, And I think one of the cool things that the stack showed actually was they did a, like a, I think they did some ablation studies where they were like, Hey, what happens if we do, if we do decontamination of our data, what happens if we do de-duplication?[00:31:14] What happens if we do near dup of our data and how does the model get better? And they have like some compelling results that showcase data quality really matters here, but ultimately, Yeah, I think it is an interesting result that at 20 billion there's something happening. But I also think like some of these things in the future may look materially different than what they look like right now.[00:31:30] Hmm. Do you think[00:31:31] Alessio Fanelli: the token limitation is actually a real architectural limitation? Like if you think about the tokens need as kind of like atic, right? Like once you have. 50,000 tokens context, like 50,000 or infinite. For most use cases, it's like the same. Where do you think that number is, especially as you think about code, like some people have very large code bases, there's a lot.[00:31:53] Have you done any work there to figure out where the sweet[00:31:55] Varun Mohan: spot is? Yeah, look, I think what's gonna really end up happening is if people come up with a clever way and, and it, there was some result research that I believe came out of Stanford. I think the team from the Helm group, I think came out with some architecture that looks a little bit different than Transformers, and I'm sure something like this will work in the future.[00:32:13] What I think is always gonna happen is if you find a cheap way to embed context, people are gonna figure out a way to, to put as much as possible in because L LM so far have been like virtually stateless. So the only thing that they have beyond fine tuning is like just shoveling everything you can inside.[00:32:28] And there are some interesting papers, like retro, actually there are maybe some interesting pieces of thought like ideas that have come out recently. Yeah, let's go through them. So one of the really interesting ideas, I think is retro. It's this paper that came out of DeepMind and the idea is actually, let's say you send out, you send out, uh, a prompt.[00:32:44] Okay? Send out a prompt. You compute the burt embedding of that. And then you have this massive embedding database. And by massive, I'm not talking about like gigabytes, I'm talking about terabytes. Like you have, geez, you actually have 10 times the number of tokens as what was used to train the model. 
So like, let's say you had a model that was trained on a trillion tokens, you have a 10 trillion embed, uh, like embedding database.[00:33:04] And obviously Google has this because they have all content that ever existed in humanity and they have like the best data set and sort of, they were able to make one of these, uh, embedding databases. But the idea here, which is really cool, is you end. Taking your prompt, computing, the bird, embedding you find out the things that were nearby.[00:33:20] So you do roughly like a semantic search or an embedding search within that. And then you take those, you take the documents that were from those embeddings and you shove those in the model too, in what are called like cross chunked attention. So you like shove them in the model with it as well.[00:33:34] Suddenly now the model is able to take in external. Which is really exciting actually, because suddenly now you're able to get dynamic context in, and the model in some sense is deciding what that context is. It's not deciding it completely. In this case, because the Bert model in this case was actually frozen.[00:33:50] It wasn't trained with the retro model as well, but. The idea is you're somehow adding or augmenting context, which I think is like quite exciting. There's probably two futures. Either context becomes really cheap. Right now it's quadratic. Maybe there's a future where it becomes linear in the, in the size of the context, but the future might actually be the model itself dictates, Hey, I have this context.[00:34:10] You have this data source. Give me this. The model itself is going out into your database and like being like, I want this information, and this is kind of like. What Bing search is looking like. Right? Or bing chat is sort of looking like where it's like I, the model is probably, there's probably some model that's saying I want this information.[00:34:27] And that is getting augmented into the context. Now the model itself knows what context it sort of has and it can sort of like build a state machine of sort of what it needs. And that's probably what the future of this looks like. So you, you[00:34:37] swyx: predict monster embedding database[00:34:39] Varun Mohan: companies? Probably Monster embedding database companies or, yeah.[00:34:43] The model in some sense will need to talk to, Talk to these embedding databases. I'm actually not convinced that the current breed of embedding database companies are like ready for what the future sort of looks like. I think I'm just looking at their pricing, how much it costs per gigabyte and it's prohibitive at the scale we're talking about, like let's say you actually did want to host a 10 terabyte embedding database.[00:35:03] A lot of them were created, let's say two years ago, two, three years ago, where people were like, you know, embedding databases are small and they need to make the cost economics work. But maybe, yeah, there's probably gonna be a big workload there. I will just say for us, we will probably just build this in-house to start with, and that's because I think the technology probably isn't there.[00:35:20] And I think that the technology isn't there yet. Like waiting on point solutions to come up is a lot harder, um, than probably building it up. The way I, I like to think about this is probably the world looks on the LM space. 
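A toy sketch of just the retrieval step described here: embed the prompt, find nearby chunks, and prepend them to the context. Real RETRO uses a frozen BERT encoder and feeds the neighbours through chunked cross-attention inside the model rather than simple prompt stuffing, and the embed() function below is a crude stand-in, not BERT.

```python
# Toy retrieval-augmentation sketch: embed the prompt, look up nearby chunks,
# and stuff them into the context. embed() is a placeholder, not a real encoder.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: hash words into a fixed-size unit vector."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-8)

chunk_db = [
    "quicksort implementation in python using recursion",
    "pandas groupby examples for dataframes",
    "how to configure nginx as a reverse proxy",
]
chunk_vecs = np.stack([embed(c) for c in chunk_db])

def retrieve(prompt: str, k: int = 2) -> list[str]:
    sims = chunk_vecs @ embed(prompt)          # cosine similarity (unit vectors)
    return [chunk_db[i] for i in np.argsort(-sims)[:k]]

prompt = "write a quicksort function in python"
augmented = "\n".join(retrieve(prompt)) + "\n" + prompt
print(augmented)
```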
Looks like how the early internet days were, where I think the value was accrued to probably like Google and Google needed to figure out all the crazy things to make their workload work.[00:35:41] And the reason why they weren't able to outsource is, is no one else was feeling the pain. ,[00:35:46] swyx: they're just solving their own pain points. They're just solving their own pain points. They're so far ahead of everyone else. Yes, yes. And just wait[00:35:50] Varun Mohan: for people to catch up. Yes. Yes. And that's maybe different than how things like Snowflake look where the interface has been decided for what SQL looks like 50 years ago.[00:35:58] And because of that, you can go out and build the best database and Yeah, like everyone's gonna be like, this doesn't make my beer taste better. And buy your database basically. That's[00:36:08] swyx: a great reference, by the way. Yeah. We have some friends of the, the pod that are working on embedding database, so we'll try to connect you Toroma[00:36:14] Varun Mohan: and see.[00:36:14] Yeah. Oh, I actually know Anton. I worked with him at Neuro. Oh. Although, there you go. Yeah. Uh, what do you, well, what do you think about, I mean,[00:36:20] swyx: so chromas pivoting towards an embedding[00:36:22] Varun Mohan: database. I think it's an interesting idea. I think it's an interesting idea. I wonder what the early set of workloads that.[00:36:27] They will hit our, and you know what the scaling requirements are. This is maybe the classic thing where like, the teams are great, but you need to pick a workload here that you care about the most. You could build anything. You could build anything. When you're an infrastructure company, you can go in, if I was selling, serving in for, I could build, serving for like linear aggression.[00:36:44] I could build this, but like, unless you hit the right niche for the end user, it's gonna be. . So I think it, I'm excited to see what comes out and if they're great, then we'll use it. Yeah.[00:36:54] swyx: I also like how you slowly equated yourself to Google there. Oh, we're not, we're not Google. You're, you're gonna be the Google of ai.[00:37:00] Varun Mohan: We're definitely, we're definitely not Google. But I was just saying in terms of like, if you look at like the style of companies that came out. Yeah. You know? Absolutely. Or maybe we should live in the cutting edge in[00:37:08] swyx: the future. Yeah. I think that's the pitch.[00:37:10] Varun Mohan: Okay, thanks for b***h us.[00:37:13] Alessio Fanelli: So you just mentioned the older vector embedding source are kind of not made for the L l M generation of compute size.[00:37:21] what does l LM ops look like? You know, which pieces need to be drastically different? Which ones can we recycle?[00:37:27] Varun Mohan: Yeah. One of the things that we've found, like in our own thing of building code that's been just shows how much is missing, and this is the thing where like, I don't know how much of this you can really outsource, which is like we needed to build eval infrastructure.[00:37:40] That means how do you build a great code? And there are things online like human eval, right? And uh, I was telling, which is the benchmark telling Sean about this, the idea of human eval is really neat for code. The idea is you provide a bunch of functions with Docstrings and the eval instead of being, did you predict next token?[00:37:56] It's like, did you generate the entire function and does the function run correctly against a bunch of unit tests? Right. 
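A minimal sketch of that functional, HumanEval-style check: execute the generated function and run it against unit tests, scoring the fraction of candidates that pass. Real harnesses run candidates in a sandboxed subprocess with timeouts; never exec untrusted model output like this outside a toy.

```python
# Minimal sketch of functional (HumanEval-style) evaluation: run a generated
# completion against unit tests instead of scoring next-token prediction.
def passes_tests(candidate_src: str, test_src: str) -> bool:
    env: dict = {}
    try:
        exec(candidate_src, env)   # defines the candidate function
        exec(test_src, env)        # asserts against it
        return True
    except Exception:
        return False

candidates = [
    "def add(a, b):\n    return a + b\n",
    "def add(a, b):\n    return a - b\n",   # a wrong completion
]
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n"

frac_pass = sum(passes_tests(c, tests) for c in candidates) / len(candidates)
print(f"fraction passing: {frac_pass:.2f}")
```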
And we've built more sophisticated evals to work on many languages, to work on more variety of code bases. One of the issues that ends up coming up with things like human eval is contam.[00:38:12] Because a lot of these, uh, things that train models end up training on all of GitHub GitHub itself has human eva, so they end up training on that. And then the numbers are tiny, though. It's gonna be tiny, right? But it doesn't matter if it's tiny because it'll just remember it. It'll remember that it's, it's not that it's that precise, but it will, it's like, it's basically like mixing your, your training and validation set.[00:38:32] It's like, oh, yeah, yeah, yeah, yeah. But we've seen cases where like online where someone is like, we have a code model that's like, they we're like, we did this one thing, and HU and human eval jumped a ton and we were just like, huh, did human eval get into your data set? Is that really what happened there?[00:38:46] But we've needed to build all this eval. And what is shown is data cleaning is massive, but data cleaning looks different by. Like code data cleaning is different than what is a high quality piece of code is probably different than what's a high quality legal document. Yeah. And then on top of that, how do you eval this?[00:39:01] How do you also train it at scale at whatever cost you really want to get? But those are things that the end user is either gonna need to solve or someone else is gonna need to solve for them. And I guess maybe one of the things I'm a little bearish on is if another company comes out and solves eval properly for a bunch of different verticals, what was the company that they were selling to really?[00:39:21] What were they really doing at that point? If they themselves were not eval for their own workload and all these other things? I think there are cases where, let's say for code where we probably couldn't outsource our eval, like we wouldn't be able to ship models internally if we didn't know how to eval, but it's clear that there's a lot of different things that people need to take.[00:39:38] Like, Hey, maybe there's an embedding piece. How large is this embedding database actually need to be? But hey, this does look very different than what classic ML ops probably did. Mm-hmm. . How[00:39:47] Alessio Fanelli: do you compare some of these models? Like when you're thinking about model upgrading and making changes, like what does the testing piece of it internally?[00:39:56] Yeah. For us look like.[00:39:56] Varun Mohan: For us, it's like old school AB testing. We've built like infrastructure to be able to say, ramp up users from one to 10 to. 50% and slowly roll things out. This is all classic software, uh, which[00:40:09] swyx: you do in-house. You don't, you don't buy any[00:40:10] Varun Mohan: services. We don't buy services for that.[00:40:13] There are good services, open source services that help you just don't need them. Uh, yeah, I think that's just like not the most complicated thing for us. Sure. Basically. Yeah. Uh, but I think in the future, maybe, we'll, obviously we use things like Google Analytics and all this other stuff, but Yeah. For things of ramping our models, finding out if they're actually better because the eval also doesn't tell the whole story because also for us, Even before generating the prompt, we do a lot of work.[00:40:36] And the only way to know that it's really good across all the languages that our users need to tell us that it's actually good. 
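The ramp just described (1% to 10% to 50%) is straightforward plumbing to sketch; the key detail is deterministic bucketing so a user stays in the same arm as the ramp widens. This is purely illustrative, not anyone's production system, and the event log is made up.

```python
# Sketch of a percentage ramp plus an acceptance-rate readout, in the spirit of
# the rollout described above. Illustrative only.
import hashlib

def bucket(user_id: str) -> int:
    """Deterministic 0-99 bucket so a user stays in the same arm as the ramp widens."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def in_new_model_arm(user_id: str, ramp_percent: int) -> bool:
    return bucket(user_id) < ramp_percent

# Fake completion logs: (user_id, accepted?)
events = [("u1", True), ("u2", False), ("u3", True), ("u1", True), ("u4", False)]
for ramp in (1, 10, 50):
    arm = [acc for uid, acc in events if in_new_model_arm(uid, ramp)]
    rate = sum(arm) / len(arm) if arm else float("nan")
    print(f"ramp {ramp:>2}%: {len(arm)} completions, acceptance {rate:.2f}")
```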
And, and they tell us by accepting completions. So, so GitHub[00:40:44] swyx: co-pilot, uh, the extension does this thing where they, they like, they'll set a timer and then within like five minutes, 10 minutes, 20 minutes, they'll check in to see if the code is still there.[00:40:54] I thought it was a[00:40:54] Varun Mohan: pretty creative way. It's, it's a very, it's honestly a very creative way. We do do things to see, like in the long term, if people did. Accept or write things that are roughly so because they could accept and then change their minds. They could accept and then change their minds. So we, we are mindful of, of things like that.[00:41:09] But for the most part, the most important metric is at the time, did they actually, did we generate value? And we want to know if that's true. And it's, it's kind of, it's honestly really hard to get signal unless you have like a non-trivial amount of usage, non-trivial, meaning you're getting, you're doing hundreds of thousands of completions, if not millions of completions.[00:41:25] That sounds like, oh wow. Like, that's like a very small amount. But like it's classic. Maybe like if you look at like when I used to be an intern at Quora, like, you know, now more than seven, eight years ago. When I was there, I like shipped a change and then Cora had like millions of daily actives and then it looked like it was good, and then a week later it was just like way worse.[00:41:43] And how is this possible? Like in a given hour we get like hundreds of thousands of interaction, just like, no, you just need way more data. So this is like one of those things where I think having users is like genuinely very valuable to us, basically. Users is all you need. . Yeah.[00:41:59] swyx: Um, by the way, since you brought out Quora, have you tried po any, any thoughts[00:42:03] Varun Mohan: on po I have not actually tried po I've not actually tried.[00:42:05] I[00:42:05] swyx: mean, it seems like a question answering website that's been around for 20 years or something. Would be very, would be very good at question answering. Yeah.[00:42:12] Varun Mohan: Also Adam, the ceo, is like incredibly brilliant. That guy is like insanely smart, so I'm sure they're gonna do,[00:42:18] swyx: they have accidentally built the perfect like data collection company for For qa.[00:42:22] Varun Mohan: Yeah. . It takes a certain kind of person to go and like cannibalize your original company like the in, I mean, it was kinda stagnant for like a few years. Yeah, that's probably true. That's[00:42:31] swyx: probably true. The observation is I feel like you have a bias to its domain specific. , whereas most research is skewed towards, uh, general models, general purpose models.[00:42:40] I don't know if there's like a, a deeper insight here that you wanna go into or, or not, but like, train on all the things, get all the data and you're like, no, no, no. Everyone needs like customized per task,[00:42:49] Varun Mohan: uh, data set. Yeah. I think I'm not gonna. Say that general intelligence is not good. 
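The timer-based check mentioned above (is the accepted completion still there a few minutes later, even after edits) can be approximated with a simple similarity measure. This is only a rough illustration of the idea, not how either extension actually instruments it.

```python
# Rough "did the accepted completion survive later edits?" signal: compare the
# inserted text with the same region of the buffer after a delay.
import difflib

def survival_ratio(inserted: str, region_later: str) -> float:
    """Approximate fraction of the accepted completion still present later."""
    return difflib.SequenceMatcher(None, inserted, region_later).ratio()

inserted = "def add(a, b):\n    return a + b\n"
later = "def add(a, b):\n    # handle floats too\n    return a + b\n"
print(f"survival after edit: {survival_ratio(inserted, later):.2f}")
```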
You want a base model that's still really good and that's probably trained on normal text, like a lot of different content.[00:43:00] But I think probably one thing that old school machine learning, even though I'm like the kind of person that says a lot of old school machine learning is just gonna die, is that training on a high quality data set for your workload is, is always gonna yield better results and more, more predictable results.[00:43:15] And I think we are under no illusions that that's not the case. Basical. And[00:43:19] swyx: then the other observation is bandwidth and connectivity, uh, which is not something that people usually think about, but apparently is a, is a big deal. Apparently training agreed in the synchronous needs, high GPU coordination.[00:43:29] These are deleted notes from Sam Altman talking about how they think about training and I was like, oh yeah, that's an insight. And[00:43:34] Varun Mohan: you guys have the same thing. Yeah. So I guess for, for training, you're right in that it is actually nuts to think about how insane the networks are for NVIDIA's most recent hardware, it's.[00:43:46] For the H 100 boxes, you shove eight of these H 100 s on a. Between two nodes. The bandwidth is 3,200 gigabits a second, so 400 gigabytes a second between machines. That's like nuts when you just sit and think about it. That's like double the memory bandwidth of what a CPU has, but it's like between two machines.[00:44:04] On top of that, within the machine, they've created this, this fabric called envy link that allows you to communicate at ultra low latency. That's even lower than P C I E. If you're familiar, that's like the communication protocol. . Yeah, between like the CPU and the other devices or other P C I E devices.[00:44:21] All of this is to make sure that reductions are fast, low latency, and you don't need to think about it. And that's because like a lot of deep learning has sort of evolved. Uh, training has evolved to be synchronous in the OG days. There is a lot of analysis in terms of how good is asynchronous training, which is like, Hey, I have a node, it has a current state of the model.[00:44:39] It's gonna update that itself locally, and it'll like every once in a while, go to another machine and update the weights. But I think like everyone has converged to synchronous. I'm not exactly sure. There's not a lot of good research on asynchronous training right now. Or maybe there is an, I haven't read it.[00:44:52] It's just that there isn't as much research because people are just like, oh, synchronous works. Uh, and the hardware is continually upleveled to handle[00:44:59] swyx: that. Yeah. It was just un unintuitive to me cuz like the whole purpose of GPUs could train things. A lot of things in parallel. Yes.[00:45:05] Varun Mohan: But the crazy thing is also, maybe I can, I can give some dumb math here.[00:45:09] Sure. Here, which is that, uh, let's go with uh, G B T three, which is like 170 billion per. The optimizer state, so while you're training is 14 times the size of the model, so in this case, if it's like 170 billion parameters, it's probably, I'm not great at mental math here, but that's probably around 2.5 terabytes to just store the optimizer state.[00:45:30] That has gotta be sharded across a lot of machines. Like that is not a single gpu. Even if you take an H 100 with 80 gigs to just shard that much, that's like 40, at least 30 machines. 
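That back-of-envelope math works out as follows. The 14 bytes per parameter is the figure used in the conversation for mixed-precision Adam bookkeeping; exact byte counts vary by training recipe.

```python
# The "dumb math" above as a script: optimizer/master-weight state for a
# GPT-3-scale model, using the 14 bytes/param figure from the conversation.
import math

params = 170e9                 # parameter count used in the conversation
bytes_per_param_state = 14     # optimizer + master weights, per the discussion
gpu_mem_bytes = 80e9           # one 80 GB H100/A100

state_bytes = params * bytes_per_param_state
print(f"optimizer state: ~{state_bytes / 1e12:.1f} TB")
print(f"GPUs just to hold it: >= {math.ceil(state_bytes / gpu_mem_bytes)}")
```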
So there's like something there where these things need to communicate with each other too.[00:45:44] swyx: You need to vertically scale horizontally.[00:45:46] Varun Mohan: Yeah. You gotta co-located, you gotta somehow feel like you have this massive, the, the ideal programming paradigm is you feel like you have this massive computer. That has no communication, you know, overhead at all, but it has like infinite computer and infinite memory bandwidth.[00:45:59] swyx: That's the AI cluster. Um, okay, well, uh, we want to head to the questions.[00:46:05] Alessio Fanelli: So favorite AI product that you are not[00:46:08] Varun Mohan: building? Yeah, I'm friends with some of the folks at Mid Journey and I really think the Mid Journey product is super cool, especially seeing how the team is iterating and the quality of generations. It consistently gets upleveled. I think it's like quite neat and I think internally at at exa functional, we've been trying out mid Journey for like random content to like generate images and stuff.[00:46:26] Does it bother[00:46:26] swyx: you that they have like a style. I don't know. It, it seems like they're hedging themselves into a particular, like you want mid journey art, you go there.[00:46:33] Varun Mohan: Yeah. It's a brand of art. Yeah, you're right. I think they do have a style, but it seems more predictably good for that style. Okay. So maybe that's too, so just get good at, uh, domain specific thing.[00:46:41] Yeah. Yeah. maybe. Maybe I, maybe I'm just selling, talking to a booker right now. . Yeah. Uh, okay.[00:46:46] swyx: Uh, next question. Uh, favorite AI people and[00:46:48] Varun Mohan: communities? Yeah, so I think I mentioned this before, but I think obviously the open. The opening eye folks are, are insane. Like we, we only have respect for them. But beyond that, I think Elu is a pretty special group.[00:46:59] Especially it's been now probably more than a year and a half since they released like G P T J, which was like back when open source G PT three Curri, which was comparable. And it wasn't like a model where like, It wasn't good. It was like comparable in terms of perplexity to GT three curity and it was trained by a university student actually, and it just showed that, you know, in the end, like I would say pedigree is great, but in if you have people that are motivated know how computers work and they're willing to just get their hands dirty, you can do crazy things and that was a crazy project that gave me more hope.[00:47:34] Decentral training being potentially pretty massive. But I think that was like a very cool thing where a bunch of people just got on Discord and were chatting and they were able to just turn this out. Yeah. I did[00:47:42] swyx: not know this until I looked in further into Luther, but it was not a formal organization.[00:47:45] Was a company was a startup. It's not, yeah. Bunch of guys on Discord.[00:47:48] Varun Mohan: They gotta you, they gotta keep you research grant and they somehow just wrote some codes. .[00:47:52] Alessio Fanelli: Yeah. Yeah. Listen to APAC with Connor, who's the person, and basically Open Eye at the time was like, we cannot release G P T because it's like too good and so bad.[00:48:01] And he was like, He actually said he was sick, so he couldn't leave home for like a, a few weeks. So it was like, what else am I gonna do? And ended up

Papers Read on AI
Yuan 1.0: Large-Scale Pre-trained Language Model in Zero-Shot and Few-Shot Learning

Papers Read on AI

Play Episode Listen Later Dec 14, 2022 30:46


Recent work like GPT-3 has demonstrated excellent zero-shot and few-shot performance on many natural language processing (NLP) tasks by scaling up model size, dataset size and the amount of computation. However, training a model like GPT-3 requires a huge amount of computational resources, which puts it out of reach for most researchers. In this work, we propose a method that incorporates large-scale distributed training performance into model architecture design. With this method, Yuan 1.0, the current largest singleton language model with 245B parameters, achieves excellent performance across thousands of GPUs during training and state-of-the-art results on NLP tasks. A data processing method is designed to efficiently filter a massive amount of raw data; the current largest high-quality Chinese corpus, with 5TB of high-quality text, is built with this method. 2021: Shaohua Wu, Xudong Zhao, Tong Yu, Rongguo Zhang, C. Shen, Hongli Liu, Feng Li, Hong Zhu, Jiangang Luo, Liang Xu, Xuanwei Zhang https://arxiv.org/pdf/2110.04725v2.pdf
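The data-processing step in that abstract is the part most readers can try in miniature: exact deduplication plus a crude quality heuristic. The sketch below only illustrates the shape of the idea under those assumptions; the paper's actual pipeline is a far more involved, cluster-scale filtering system.

```python
# Toy sketch of two common corpus-cleaning steps: exact deduplication by content
# hash and a crude quality heuristic. Illustrative only; not the paper's pipeline.
import hashlib

def looks_high_quality(doc: str) -> bool:
    toks = doc.split()
    if len(toks) < 20:                      # too short to be useful
        return False
    alpha = sum(t.isalpha() for t in toks) / len(toks)
    return alpha > 0.7                      # mostly words, not markup or noise

def clean_corpus(docs):
    seen = set()
    for doc in docs:
        h = hashlib.sha256(doc.strip().encode()).hexdigest()
        if h in seen or not looks_high_quality(doc):
            continue
        seen.add(h)
        yield doc

raw = ["a short junk line", "word " * 30, "word " * 30]   # last two are duplicates
print(len(list(clean_corpus(raw))))                        # -> 1
```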

The New Stack Podcast
Kubernetes and Amazon Web Services

The New Stack Podcast

Play Episode Listen Later Nov 17, 2022 30:42


Cloud giant Amazon Web Services manages the largest number of Kubernetes clusters in the world, according to the company. In this podcast recording, AWS Senior Engineer Jay Pipes discusses AWS' use of Kubernetes, as well as the company's contribution to the Kubernetes code base. The interview was recorded at KubeCon North America last month. The Difference Between Kubernetes and AWS: Kubernetes is an open source container orchestration platform. AWS is one of the largest providers of cloud services; in 2021, the company generated $61.1 billion in revenue worldwide. AWS provides a commercial Kubernetes service, called the Amazon Elastic Kubernetes Service (EKS), which simplifies the Kubernetes experience by adding a control plane and worker nodes. In addition to providing a commercial Kubernetes service, AWS supports the development of Kubernetes by dedicating engineers to work on the open source project. "It's a responsibility of all of the engineers in the service team to be aware of what's going on in the upstream community, to be contributing to that upstream community, and making it succeed," Pipes said. "If the upstream open source projects upon which we depend are suffering or not doing well, then our service is not going to do well. And by the same token, if we can help that upstream project or projects to be successful, that means our service is going to be more successful." What is Kubernetes in AWS? In addition to EKS, AWS also has a number of other tools to help Kubernetes users. One is Karpenter, an open-source, flexible, high-performance Kubernetes cluster autoscaler built with AWS. Karpenter provides more fine-grained scaling capabilities compared to Kubernetes' built-in Cluster Autoscaler, Pipes said; instead of using Cluster Autoscaler, Karpenter deploys AWS' own Fleet API, which offers superior scheduling capabilities. Another tool for Kubernetes users is cdk8s, an open-source software development framework for defining Kubernetes applications and reusable abstractions using familiar programming languages and rich object-oriented APIs. It is similar to the AWS Cloud Development Kit (CDK), which helps users deploy applications using AWS CloudFormation, but instead of the output being a CloudFormation template, the output is a YAML manifest that can be understood by Kubernetes. AWS and Kubernetes: In addition to providing open source development help to Kubernetes, AWS has offered to help defray the considerable expense of hosting the Kubernetes development and deployment process. Currently, the Kubernetes upstream build process is hosted on the Google Cloud Platform, and the artifact registry is hosted in Google's container registry, totaling about 1.5TB of storage. AWS alone was paying $90,000 to $100,000 a month in egress costs just to get the Kubernetes code onto AWS-hosted infrastructure, Pipes said. AWS has been working on a mirror of the Kubernetes assets that would reside on the company's own cloud servers, thereby eliminating the Google egress costs typically borne by the Cloud Native Computing Foundation. "By doing that we completely eliminate the egress costs out of Google data centers and into AWS data centers," Pipes said.

Ingenios@s de Sistemas
Episodio 102 - ResourceSpace: Implantación

Ingenios@s de Sistemas

Play Episode Listen Later Sep 6, 2022 15:46


Today is Tuesday, September 6, and we continue with an episode dedicated to ResourceSpace, this magnificent tool. I told you about S3, a cheap form of storage you can connect to your VPS with Rclone. Until now I had used S3 as a backup destination for WordPress using the UpdraftPlus plugin. So, I install a ResourceSpace instance and provision 5TB of S3 storage, which I connect with Rclone, mounting that storage at /media/storage and creating a symbolic link to /resourcespace/filestore. All set: the interface is very nice in version 9.8. I try the first content upload and something is not right; the progress bar advances, then goes backwards, and finally errors out. Hmm, but it did let me upload an application logo, and that also goes to the filestore. It looks very much like a speed problem: S3 is very slow storage, it actually has an HTTPS interface and is mounted as a remote drive on my VPS. I checked the ResourceSpace Google group and saw that there is a plugin for S3 storage, available for version 9.6, so I switch with SVN (Subversion): svn switch 9.6. I install the plugin and, although in theory it should work, it does not work with AWS or DigitalOcean. I follow the configuration document to the letter but cannot get it working, with a high degree of frustration at this point. Then I remember that a few versions back a variable was added that lets you separate, within the filestore, the high-quality files from the previews. I read up on the variable and it expects two folders inside the filestore: "original" and "resized". I remove the filestore folder that I had linked to /media/storage, create a new filestore folder, and inside it create "original" and "resized". I do some upload tests to confirm there is no problem locally, and indeed everything works very well. Now I make a temporary copy of what is in /filestore/original, link the S3 storage at /media/almacen/original to /filestore/original, and copy into that folder the material ResourceSpace had already generated. I run more video upload tests and it works! Let's not declare victory yet; let's try a large video, 1 GB. Go on, it uploads, it keeps uploading; it is slower than local storage but it is steady, with no back-and-forth on the progress bar. I configure the system to enable the offline processing queue and set the variable so that previews of uploaded files are not generated immediately but deferred, using that job queue. With this, I manage to use S3 as cheap external storage with good search performance, and on top of that, thanks to this new version, previews no longer block the upload of new content: everything is uploaded first and processed afterwards. Conclusion: with a server that costs 60 euros a month you get a private multimedia content manager with 5TB of storage. There are many companies that will not leave their content in the hands of third-party platforms, so this is a very good implementation for your potential clients. It is also a possible solution for offering content management and archiving as a service. If you are a marketing professional who works with other clients and handles graphic content, social media or YouTube, you can use ResourceSpace as a communication tool, to upload the videos you are working on. If you edit podcasts or videos for third parties, you can use ResourceSpace as a communication tool for reviews and workflow control.
If you can think of any other way a content manager like this could be used, tell me: write to me at https://tecnolitas.com/ideas and share yours

Channel Junkies Podcast
WALK AND TALK #64 Avoid Losing all your Video Files using this... (Yes! I lost all mine once)

Channel Junkies Podcast

Play Episode Listen Later Jul 11, 2022 20:15


Welcome to today's YouTube For Realtors WALK AND TALK #64 - I have so many things going through my head every day about YouTube for Realtors that I just want to get it all out! I get asked so many questions everyday and coach so many agents it's time to answer all of those in depth, on my walk to the gym!In todays episode I do a deep dive into Avoid Losing all your Video Files using this. There are lots of reasons why you can lose video files in your system or device. While some of these problems are avoidable, others are inevitable. This topic is from Leslie. (Shoutout to you homies!) . Question #1, what do you use for storage for your videos and B roll footage? I am starting to do short videos for Facebook and I currently use Dropbox and Google Drive. I have just the basic level of storage on these platforms and it doesn't take much to Max them out and then I'm getting prompted to buy more space. Is this something I am just going to have to bite the bullet and do or is there a better way to store or organize my footage? I leverage Google Drive, it's way easier than Dropbox. Google Drive is the easiest because nobody has to like have it. It's just a link that you can download footage. But yes, your storage space goes very quickly, and that's going to do is either make you start deleting footage to keep that space or have to pay to get more. Now what happened was I started shooting all these videos. Getting all this stuff and what I noticed first and foremost was the more footage I had on my computer. And to answer Leslie, I highly recommend going I think Costco even has one and getting a 5TB or a 10TB external hard drive, they just plug right into your computer and you save every ounce of video to it. That is my number one recommendation. Yes, you can still leverage Google Drive to send files and videos off to editors or anybody that needs it. But then you can erase it and you never have to get that additional storage. Now I pay for the additional storage now. Well, I got three external hard drives. One of them crashed, which is brutal, so I've learned again story form to have at least two external hard drives. I have a mobile one that's five terabytes and I slit it, it's called partitioning. And it's super easy to Google. How to do it, but you go get your 5TB mobile external hard drive and there's like mobile style where you can actually just plug them in on the go they do not need a power source and the bigger ones will hook up to a power source so they got to be plugged into. So I have a 10 terabyte external hard drive that one stays hooked up to my computer. My laptop's always just in my office right there, It's always hooked up I can drag files into it whatever I have every ounce of footage completely labeled. Question #2 What kind of GoPro should I use to shoot video with? I currently use my cell phone. It's kind of old. I get this question all the time. and also "Have you seen the new GoPro ten?" Yeah, it's pretty cool. But I don't even own one. My recommendation just to answer the question is just go get anything that they have at Costco, right now it is like the GoPro Hero Black 8 and they have the nine and 10. I started the whole GoPro vlogging thing for real estate with a GoPro Hero 7 black. At the time they did not make vlogging equipment for the GoPro. So I made this Frankenstein asked thing because I wanted the wide angle lens of the GoPro and the built-in stabilization. And the GoPro had it. Then I found this cheapy cell phone tripod. 
A weird metal case for the GoPro that you would attach to your chest or a car or something. Got all these nuts and bolts and put a microphone on top of it. And now when you Google or go to Amazon and type in GoPro vlog kit. Question #3 I am interested in the YouTube course for real estate agents that you've built. Is there some time limit of when I have to complete the course or is it once I purchase it? I have access to as long as it takes me. What kind of support is there if I get stuck on something? I ask because I am currently chewing on my way through a pretty big real estate coaching program that somebody else runs the names in there, but I won't say it. And I know the course and yes, it's and a lot of it has to do with paying for ads and social media. And it's very expensive, which you know my take on that ****. If you gotta pay money to run ads to drive people to your social media, your ******* social media sucks. Homie, I'm sorry and that's what these agents have figured out is is traffic. A lot of times they're not showing close deals, and if they are, it's from converting leads. And if you like converting leads then I would recommend those, but if you hate working with a bunch of people who don't want to work with you and leads usually through social media are very cheap. That you're paying for. And I've done it. I did it, it sucked. That's a lot to go through. So anytime you need any additional help or resources, that's what it's all there for. But shoot me that e-mail.Want to see over 500 free videos teaching you YouTube for Real Estate? Go here: https://www.youtube.com/channel/UCVALMF99nztSJEJ7lehzEOQTo learn more about our courses and trainings schedule call here: https://calendly.com/channeljunkiessalesWant to partner with us at eXp and get all our training and coaching free? Tell me more here: https://forms.gle/UrVcNtnSYCR6H1Vd9Or email info@jacksonwilkey.com and say you came from the podcast!

Les Technos
Apple et Samsung : l'amour vache ?

Les Technos

Play Episode Listen Later Jun 28, 2022 15:22


In our Bonus 359 with Sébastien B. and David. Extradition: Assange has lost a battle over his extradition, but not yet the war. ( https://tinyurl.com/23kfq5qv) Micron: Micron announces the first 1.5TB MicroSD card. ( https://tinyurl.com/2yzhgat3 | https://tinyurl.com/27l59g4x) Apple: Apple and Samsung in love again, collaborating on the iPhone. ( https://tinyurl.com/2ayybxw7) Intel: the "Serpent Canyon" NUC. ( https://tinyurl.com/2bwbcl8v)

Dream Chasers Radio
Get 5TB of space for only $7.95 for the first year - First 100 sign ups

Dream Chasers Radio

Play Episode Listen Later Feb 13, 2022 20:00


We have two limited-time offers for your listeners from one of the world leaders in cloud backup, iDrive! This offer includes 5TB of cloud storage, which is the equivalent of approximately 1 million photos, 1,250,000 songs or 600 hours' worth of HD movies! iDrive is also award-winning across the most reputable tech publications, such as PCMag, TechCrunch, Wired and PCWorld, and is trusted by over 4 million customers! Offer 1: for the first 100 sign-ups, 90% off for the first year ($7.95). https://backupmytracks.com

Big Baby's Podcast
Big Baby And Friends Thursday Podcast| Remembering John Madden|

Big Baby's Podcast

Play Episode Listen Later Dec 30, 2021 100:57


On this episode Parlay Pete and I discuss the following: Remembering Madden, NFC Playoff Picture, AFC Playoff Picture, Baker Blows. Parlay Pete's CFB 6 Pack: Alabama -13.5, UGA ML, Kentucky ML, Ohio State -4.5, Baylor vs Ole Miss Over 55.5, OKST vs ND Under 45. Parlay Pete's NFL 6 Pack: Seahawks ML, Ravens +3.5, Chargers ML, KC vs Cincy Over 50.5, TB -13, 49ers -12.5.

Björeman // Melin
Avsnitt 287: Christian står och steker fisk

Björeman // Melin

Play Episode Listen Later Dec 17, 2021 46:41


Follow-up/warm-up: Scorched tanks in the previous episode should of course have been Scorched Earth. Jocke blocks packets in pfSense. DMZ Retro #5 sent to print. Cloud backup - idrive.com has a ... eh, drive, again: 5TB of backup space for $7.95 the first year. Jocke signed up. IDE controller bought for the A2000; the alternative was to patch the TF-536, and that was no alternative. Topics: Blank Check - a pleasantly rambling film podcast. Robocop, Mad Max: Fury Road, Tenet, advertising. All the pseudo-philosophizing you do about desktop versus laptop, work computer versus personal machine and so on - is it really just smoke around the fact that you would rather use your most powerful computer all the time? Film and TV: Mad Max: Fury Road still holds up. An English YouTuber/photographer recorded Swedish baby squirrels while they were eating; the audio ended up in the remake of Dune. Hemlock Grove: Jocke has watched, and suffered through, all three seasons. 3.5/5BMÅ. Wrap-up: Christmas card troubles. Links: Scorched tanks, Scorched earth, Download Scorched Earth, Worms, Pantscast - Panic's farting podcast app, pfBlocker-NG, Cannon fodder, 16-bit Memories, IDrive.com, Arq backup, Blank check, Blank check on Robocop, Blank check on Mad Max: fury road, Buddha IDE 20th anniversary edition, My squirrel recording is in a movie!, Hemlock grove. Full episode information is available here: https://www.bjoremanmelin.se/podcast/avsnitt-287-christian-star-och-steker-fisk.html.

Hacker News TLDR
[#107] Metaverses, Selling Homes, and Fusion

Hacker News TLDR

Play Episode Listen Later Nov 6, 2021 29:03


We discuss whether the metaverse is bullshit, is the internet, or is Minecraft. What's Sam Altman up to today? And how much money can you lose from automating home sales? Links: Physics Student Earns PhD at Age 89; The metaverse is bullshit; The metaverse is already here, it's called the internet; The metaverse is already here – it's Minecraft; Review: The 14-inch MacBook Pro Resets the Timeline; Google's infamous internal 2010 "I just want to serve 5TB" video now public; Now you can (try to) serve five terabytes, too; Backblaze S-1 IPO; The 3-2-1 Backup Strategy; HashiCorp – S1 IPO; Zillow to stop flipping homes, loses more than $550M, lays off 25% of staff; Keith Rabois's Twitter; Helion

Success Unscrambled | Blog Traffic Tips | Business Success Stories
How to Organise Google Drive for Business

Success Unscrambled | Blog Traffic Tips | Business Success Stories

Play Episode Listen Later Oct 4, 2021 14:57


The story is told of a working mom of 3 who wanted to organise Google Drive for business use but felt a bit stuck. You see, Susan had 8 clients and she needed to find a way to manage their data effectively. While Susan used ClickUp to manage her daily and weekly client projects, each project resulted in content that she created for her clients. As a result, she needed to store that content somewhere to keep it safe. If you have ever used ClickUp or any project management software you'll know that it is easy to attach files. It is possible to use the Doc and Table view inside ClickUp to create content. However, if you are creating a lot of content daily, ClickUp can become cluttered quickly, making it difficult to easily find and retrieve data. In this post, you'll learn how to organise your Google Drive for business, making it simple to manage many clients. Google Workspace vs Google One: Before looking at how you can organise Google Drive for business use, the first challenge you'll encounter is a need for more storage. So, let's look at the storage plans available on Google Drive. The free version of Google Drive comes with 15GB of storage. I still remember when, many years ago, 15GB was a lot of storage space. It's interesting how small businesses now need to store much more data virtually. Google One: In case you weren't aware of it, Google One is the paid version of Google Drive for personal use. The price of Google One ranges from $19.99 a year (100GB) all the way up to $99.99 a year (2TB). The Google One price plan gives you the ability to add family members as well. Another added benefit is that you'll get access to Google experts on any of the paid plans. Google Workspace: Small business owners with a team of assistants may prefer Google Workspace, formerly G Suite. This is because it is the paid version of Google Drive designed for business use. Prices for these plans range from $6.00 (30GB) per user per month up to $20.00 (5TB) per user per month. The business benefits include video meetings and recordings, security and management control, and custom, secure business email. At the end of the day, choosing between Google One and Google Workspace really depends on your business needs. Mapping Out Your Drive Hierarchy: If you plan to hire a team or build out an agency, it is super important to map out the structure of your folders. Let's look at three different examples of service businesses using Google Drive. The 3 service businesses I'm going to feature are: Web Design Agency, Social Media Management, and Launch Management Agency. Let's look at each one in turn so that you'll understand how to organise Google Drive for business. Web Design Agency: Any web design agency will tell you that they have a specific process for onboarding their clients. When a client signs up with a web design agency they'll need to sign a contract and complete an intake form. The client would also need to supply a branding guide as well as images that they want to include on the website. There'll also be a need for other content like copywriting, unless the agency will be providing it as part of the package. Finally, when the web designer does a mockup of the site using a wireframe, these need to be stored somewhere as well. So far, here are the folders needed for client work: Contract, Intake form, Branding Elements, Photography, Copy, Wireframes, and Offboarding. Let's look at a different example in order to help you with your structure.
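Before moving on to the next example, here is a minimal sketch of turning that folder list into real folders with the Google Drive v3 API (google-api-python-client). The creds object is assumed to be an already-authorised OAuth credential, and the client name below is made up.

```python
# Sketch of creating the client folder hierarchy above with the Google Drive v3 API.
# `creds` is assumed to be an existing, authorised OAuth credentials object.
from googleapiclient.discovery import build

SUBFOLDERS = ["Contract", "Intake form", "Branding Elements",
              "Photography", "Copy", "Wireframes", "Offboarding"]

def create_folder(service, name, parent_id=None):
    metadata = {"name": name, "mimeType": "application/vnd.google-apps.folder"}
    if parent_id:
        metadata["parents"] = [parent_id]
    return service.files().create(body=metadata, fields="id").execute()["id"]

def scaffold_client(service, client_name):
    client_id = create_folder(service, client_name)
    for sub in SUBFOLDERS:
        create_folder(service, sub, parent_id=client_id)

# service = build("drive", "v3", credentials=creds)   # creds: your OAuth credentials
# scaffold_client(service, "Client - Acme Web Redesign")
```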
Social Media Management: Before looking at the files and folders needed for a social media manager, it is important to understand what that person does. A social media manager designs the overall social strategy for a business. This means deciding what to post on social media and how often, hashtag research, key messaging, business goals and the overall aesthetics of the social profile.

All TWiT.tv Shows (Video HD)
This Week in Google 629: Tastes Like Pikachu

All TWiT.tv Shows (Video HD)

Play Episode Listen Later Sep 16, 2021 152:48


Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show. Facebook sent flawed data to misinformation researchers. Instagram's Met Gala 2021 Table Took the Best Class Photo With Meg Thee Stallion, Saweetie & More. Announcing the 2021 Ig Nobel Prize winners. Intuit is buying Mailchimp for $12 billion to focus on small businesses. Ryan Reynolds and Will Ferrell posted their own take on TikTok's a cappella 'Grace Kelly' challenge. How to watch the Inspiration4 launch, SpaceX's first fully private mission to space. Amazon boosts hourly pay to over $18, to hire 125,000 workers. Amazon to pay full college tuition for front-line employees. Amazon Launches Its First TVs: Fire TV Omni Series with 4K Ultra HD. Amazon CEO Andy Jassy says it's 'hard to argue' its retail business is a monopoly. Amazon CEO Andy Jassy: 'It's still early days for us in media'. The next Big Tech battle: Amazon's bet on healthcare begins to take shape. Amazon gives Kindle e-readers a rare user interface overhaul. Update Google Chrome Immediately. South Korea Fines Google for Abusing Smartphone Dominance. Otherworldly 'time crystal' made inside Google quantum computer could change physics forever. Google's voice assistant in new EU antitrust investigation, MLex reports. Germany's 'sovereign cloud' is coming—and it's provided by Google. Google finishes laying a giant undersea internet cable stretching 3,300 miles from New York to the UK and Spain. Google hypes the Pixel 6 in Japan with bag of 'Google Original [Potato] Chips'. Google hosting 'Search On' 2021 keynote later this month. Hands-on with Google Assistant Driving Mode's long-awaited home screen UI. Google Photos will now ship individual prints directly to your door. Google search is finally officially getting dark mode on desktop. Area 120's 'Museletter' turns your Google Docs, Sheets, & Slides into paid newsletters. Google now lets you ask to join 'Pixel Superfans' with a simple sign-up form. Google One quietly adds 5TB storage plan for $24.99/month following Photos changes. Picks: Stacey - Luci for making power wheelchairs smart Jeff - 12 billion doses in arms are required to get the majority of adults vaccinated. We stand at about 5.5 billion & have been averaging 1 billion/month Jeff - Pixel Buds Series A Jeff - Back to the office Ant - Wanderers Photo Workshop Moved Ant - Dry Bar is Underrated and also Greg Morton Hosts: Leo Laporte, Jeff Jarvis, Stacey Higginbotham, and Ant Pruitt Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: nureva.com/twit CrowdStrike.com/twit wealthfront.com/twig

Total Ant (Audio)
This Week in Google 629: Tastes Like Pikachu

Total Ant (Audio)

Play Episode Listen Later Sep 16, 2021 152:05


Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show. Facebook sent flawed data to misinformation researchers. Instagram's Met Gala 2021 Table Took the Best Class Photo With Meg Thee Stallion, Saweetie & More. Announcing the 2021 Ig Nobel Prize winners. Intuit is buying Mailchimp for $12 billion to focus on small businesses. Ryan Reynolds and Will Ferrell posted their own take on TikTok's a cappella 'Grace Kelly' challenge. How to watch the Inspiration4 launch, SpaceX's first fully private mission to space. Amazon boosts hourly pay to over $18, to hire 125,000 workers. Amazon to pay full college tuition for front-line employees. Amazon Launches Its First TVs: Fire TV Omni Series with 4K Ultra HD. Amazon CEO Andy Jassy says it's 'hard to argue' its retail business is a monopoly. Amazon CEO Andy Jassy: 'It's still early days for us in media'. The next Big Tech battle: Amazon's bet on healthcare begins to take shape. Amazon gives Kindle e-readers a rare user interface overhaul. Update Google Chrome Immediately. South Korea Fines Google for Abusing Smartphone Dominance. Otherworldly 'time crystal' made inside Google quantum computer could change physics forever. Google's voice assistant in new EU antitrust investigation, MLex reports. Germany's 'sovereign cloud' is coming—and it's provided by Google. Google finishes laying a giant undersea internet cable stretching 3,300 miles from New York to the UK and Spain. Google hypes the Pixel 6 in Japan with bag of 'Google Original [Potato] Chips'. Google hosting 'Search On' 2021 keynote later this month. Hands-on with Google Assistant Driving Mode's long-awaited home screen UI. Google Photos will now ship individual prints directly to your door. Google search is finally officially getting dark mode on desktop. Area 120's 'Museletter' turns your Google Docs, Sheets, & Slides into paid newsletters. Google now lets you ask to join 'Pixel Superfans' with a simple sign-up form. Google One quietly adds 5TB storage plan for $24.99/month following Photos changes. Picks: Stacey - Luci for making power wheelchairs smart Jeff - 12 billion doses in arms are required to get the majority of adults vaccinated. We stand at about 5.5 billion & have been averaging 1 billion/month Jeff - Pixel Buds Series A Jeff - Back to the office Ant - Wanderers Photo Workshop Moved Ant - Dry Bar is Underrated and also Greg Morton Hosts: Leo Laporte, Jeff Jarvis, Stacey Higginbotham, and Ant Pruitt Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: nureva.com/twit CrowdStrike.com/twit wealthfront.com/twig

All TWiT.tv Shows (Video HI)
This Week in Google 629: Tastes Like Pikachu

All TWiT.tv Shows (Video HI)

Play Episode Listen Later Sep 16, 2021 152:48


Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show. Facebook sent flawed data to misinformation researchers. Instagram's Met Gala 2021 Table Took the Best Class Photo With Meg Thee Stallion, Saweetie & More. Announcing the 2021 Ig Nobel Prize winners. Intuit is buying Mailchimp for $12 billion to focus on small businesses. Ryan Reynolds and Will Ferrell posted their own take on TikTok's a cappella 'Grace Kelly' challenge. How to watch the Inspiration4 launch, SpaceX's first fully private mission to space. Amazon boosts hourly pay to over $18, to hire 125,000 workers. Amazon to pay full college tuition for front-line employees. Amazon Launches Its First TVs: Fire TV Omni Series with 4K Ultra HD. Amazon CEO Andy Jassy says it's 'hard to argue' its retail business is a monopoly. Amazon CEO Andy Jassy: 'It's still early days for us in media'. The next Big Tech battle: Amazon's bet on healthcare begins to take shape. Amazon gives Kindle e-readers a rare user interface overhaul. Update Google Chrome Immediately. South Korea Fines Google for Abusing Smartphone Dominance. Otherworldly 'time crystal' made inside Google quantum computer could change physics forever. Google's voice assistant in new EU antitrust investigation, MLex reports. Germany's 'sovereign cloud' is coming—and it's provided by Google. Google finishes laying a giant undersea internet cable stretching 3,300 miles from New York to the UK and Spain. Google hypes the Pixel 6 in Japan with bag of 'Google Original [Potato] Chips'. Google hosting 'Search On' 2021 keynote later this month. Hands-on with Google Assistant Driving Mode's long-awaited home screen UI. Google Photos will now ship individual prints directly to your door. Google search is finally officially getting dark mode on desktop. Area 120's 'Museletter' turns your Google Docs, Sheets, & Slides into paid newsletters. Google now lets you ask to join 'Pixel Superfans' with a simple sign-up form. Google One quietly adds 5TB storage plan for $24.99/month following Photos changes. Picks: Stacey - Luci for making power wheelchairs smart Jeff - 12 billion doses in arms are required to get the majority of adults vaccinated. We stand at about 5.5 billion & have been averaging 1 billion/month Jeff - Pixel Buds Series A Jeff - Back to the office Ant - Wanderers Photo Workshop Moved Ant - Dry Bar is Underrated and also Greg Morton Hosts: Leo Laporte, Jeff Jarvis, Stacey Higginbotham, and Ant Pruitt Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: nureva.com/twit CrowdStrike.com/twit wealthfront.com/twig

Web and Beyond Live
Why Super Apps Aren't Likely Coming to the US and What Small Business Can Learn From Them, and More Small Business Digital Marketing News This Week

Web and Beyond Live

Play Episode Listen Later Sep 14, 2021 25:07


Why Super Apps Aren't Likely Coming to the US and What Small Business Can Learn From Them, and More Small Business Digital Marketing News This Week - Web and Beyond Live - September 13, 2021 In this week's episode: From zippers to glass, shortages of basic goods hobble U.S. economy | Reuters Some businesses welcome Biden's vaccination mandate while others worry about the costs, effects on worker shortages Google One adds a 5TB storage plan for $24.99 per month - The Verge National Small Business Week Offers Virtual Summit For Entrepreneurs National Small Business Week Virtual Summit Small Business Summit | TheHill Small Business Administration quadruples EIDL loan cap to $2M | Restaurant Dive Facebook to buy $100 million of invoices from diverse-owned businesses Walmart to accept litecoin payments | Reuters Alibaba Debuts Dropshipping Tools, SMB Grants | PYMNTS.com Alipay: Alibaba stock tumbles after report says Beijing wants to break up popular payment app - CNN What is a super app, and why haven't they gone global? | CNBC Explains - YouTube Join us in Web and Beyond Community! It's easy and free. Each week, President of W3 Consulting, Managing Director of W3C Web Services (https://web.w3cinc.com/) and host of Web and Beyond Community (https://www.webandbeyond.community) Ray Sidney-Smith broadcasts live with the latest small business digital marketing and business productivity technology updates you need to be effective. --- Send in a voice message: https://anchor.fm/webandbeyondlive/message

Anything But Idle
Can Google do Everything?

Anything But Idle

Play Episode Listen Later Sep 14, 2021 55:25


Can Google do Everything? Google Upgrades, Rooms, Spaces and more, and the Productivity News This Week. (If you're reading this in a podcast directory/app, please visit https://anythingbutidle.com for clickable links and the full show notes and transcript of this cast.) Enjoy! Give us feedback! And, thanks for listening! If you'd like to continue discussing any news from this episode, please click here to leave a comment down below (this jumps you to the bottom of the post). In this Cast | Can Google do Everything?: Ray Sidney-Smith, Augusto Pinaud. Headlines & Show Notes | Can Google do Everything?: Resources we mention, including links to them, will be provided here. Please listen to the episode for context. Google Calendar will break down how much of your work is spent in meetings. How to measure productivity. Study of Microsoft employees shows how remote work puts productivity and innovation at risk - GeekWire. The Framework Laptop is the future of laptops — and that's why I'm buying one. Lenovo is rebooting its Chromebook Duet with a 13.3 ... - The Verge. Here's how you can install Cursive – Google's new note taking app – right now. Google One adds a 5TB storage plan for $24.99 per month. Productivity Resource of the Week: Fibery; Anyfont. Featured Story of the Week: Google Meet gets new videoconference hardware with interactive displays. Google's rename of 'Rooms' in Chat and Gmail to 'Spaces' is now underway. Google Workspace goes all-in on hybrid work with Playspace acquisition | Android Central. The Gmail app takes calls now, too, because Google wants it to do everything - The Verge. Google debuts Meet features, including 'Companion mode' | VentureBeat. Announcements: Apple Event: California Streaming on 9/14/21; ABI 080 Apple Special Event: California Streaming with Michael Sliwinski; Microsoft Event: 9/22/21; ABI 082 Microsoft Special Event: California Streaming with Art Gelwicks; Locast Shutting Down After Losing Court Battle With TV Networks. Other News: Apple Releases Safari Technology Preview 131 With Bug Fixes and Performance Improvements. Google brings Samsung 5G modem tech to U.S. market with new Pixel phone - sources | Reuters. Microsoft acquires video-editing software start-up Clipchamp. Apple CSAM controversy continues; Snowden chimes in. Asana beats expectations for Q2, reports record revenue | ZDNet. Apple announces first states for Wallet drivers licenses, IDs. Mozilla wants to make its password manager obsolete with Firefox 93 Beta (APK Download). Notion acquires India's Automate.io in push to accelerate product expansion – TechCrunch. Google Drive rolls out a nifty feature for offline access to everyone | Android Central. Google search is finally officially getting dark mode on desktop. WinZip is expanding its roster of Windows productivity apps in a big way | Windows Central. SwiftKey's latest beta works a lot faster with ... - Android Police. https://www.nytimes.com/2021/08/31/upshot/the-winners-of-remote-work.html. Yahoo Finance: Microsoft is rolling out new Teams features to deal with the hybrid work explosion. Raw Text Transcript | Can Google do Everything?: Raw, unedited and machine-produced text transcript, so there may be substantial errors, but you can search for specific points in the episode to jump to, or to reference back to at a later date and time, by keywords or key phrases. The time coding is mm:ss (e.g., 0:04 starts at 4 seconds into the cast's audio). Raymond Sidney-Smith 0:00 Hello personal productivity enthusiasts and community Welcome to Anything But Idle. The Productivity news podcast.
Today's show is brought to you by co working space by personal productivity club. I'm Ray Sidney-Smith. Augusto Pinaud 0:13 And I'm Augusto Pinaud. Raymond Sidney-Smith 0:14 And we're your hosts for Anything But Idle. This is Episode 79 09 for September 13 2021. This is all about Google's upgrades to their systems with rooms,

9to5Google Daily
Assistant Driving Mode comes to Android 12, Google working on Handoff-style 'Push' feature for Android/Chromebooks, plus more

9to5Google Daily

Play Episode Listen Later Sep 13, 2021 8:09


Listen to a short-form recap or roundup of all the top 9to5Google stories of the previous 24 hours. 9to5Google Daily is available on Spotify, Google Podcasts, Amazon, iTunes and Apple's Podcasts app, Stitcher, or through our dedicated RSS feed for Pocket Casts and other podcast players. New episodes of 9to5Google Daily are recorded every weekday. Subscribe to our podcast in Google Podcasts or your favorite podcast player to guarantee new episodes are delivered as soon as they're available. Why not add the 9to5Google Daily to your Google Assistant Routine for a quick morning update? Learn how to add us directly to your Assistant Routines right here. Follow Damien: Damien Wilde Stories discussed in this episode: Assistant Driving Mode on Android 12 updated with homescreen promised at Google I/O 2019 Google working on Handoff-like ‘Push' feature for Android & Chrome OS, starting on Pixel Google advertising Pixel 6 & Pro camera in the UK with high-profile TV show sponsorship Google One quietly adds 5TB storage plan for $24.99/month following Photos changes Drop us a line at gtips@9to5g.com. You can also rate us in Google Podcasts, Spotify, Apple Podcasts or recommend us in Pocket Casts to help more people discover the show!

Mscs Media
What happened to John McAfee? | Miami Building Collapse have 3.5 TB? | Matthew Cox | MSCS MEDIA #94

Mscs Media

Play Episode Listen Later Jul 21, 2021 70:28


What happened to John McAfee? Full interview video: https://youtu.be/eeWQLvcoZrw or youtube.com/mscsmedia/videos | Did the Miami building collapse have 3.5TB? | Matthew Cox | MSCS MEDIA #94. What really happened to John McAfee? Suicide? Was it a hit because of the information he allegedly had? Was the Miami collapse connected to McAfee allegedly having over 3.5TB of information stored there? McAfee on the run in Belize, John McAfee's final tweet, and more. Matthew B. Cox is an author, artist, and the largest mortgage fraudster in US history, if not still. John David McAfee was a computer programmer and businessman; he founded the software company McAfee Associates in 1987 and sold it for $400 million in 1994, and the company was later sold again for over a billion. Stay in touch with MSCS MEDIA - Follow me on: Please subscribe: https://bit.ly/30rUAEd | linktr.ee: https://linktr.ee/mscsmedia | IG: https://www.instagram.com/mscsmeida/ Check out Matthew Cox / True Crime - Books: https://amzn.to/3gyD4Xq Once a Gun Runner...: The Efraim Diveroli Memoir (the actual War Dogs story); The Program: How a Con Man Survived the Federal Bureau of Prisons' Cult of RDAP; Shark in the Housing Pool (Matthew's actual story); Frank Amodeo, It's Insanity: The Bizarre Story of a Bipolar Megalomaniac's Insane Plan for Total World Domination (we did a podcast all about this); Bent: How a Homeless Teen Became One of the Cybercrime Industry's Most Prolific Counterfeiters (John Boseak); and many more. IG: COXPOPART (Matthew's art, which you can purchase through DM; it is good) https://bit.ly/3esifdx | YouTube: COX POP ART - https://bit.ly/3azZHqJ | IG: INSIDE TRUE CRIME - https://bit.ly/3nbXNBM | YouTube: MATTHEW COX & INSIDE TRUE CRIME - https://bit.ly/3gyE6mg ➔ Stay Connected With MSCS MEDIA ► Subscribe: https://cutt.ly/EmO2VNO ► Linktr.ee: https://linktr.ee/mscsmedia ► Instagram: https://www.instagram.com/mscsmedia/

snobOS
Episode 115: Rat Race By Another Name

snobOS

Play Episode Listen Later Mar 5, 2021 65:15


Welcome to Episode 115 of the snobOS Podcast! Up to 6% cash back on Apple Card. Find My adds 'Items' tab in latest iOS 14.5 beta. NASA's new Mars rover is powered by a decades-old Mac chip. 2nd String: Amazon quickly changes "Hitler mustache" app icon. Twitter brings human eyes to tracking down COVID vaccine misinformation. Instagram accidentally hid likes for a few hours and everyone freaked out. For The Culture: Texas & Mississippi are relaxing COVID restrictions and Georgia is hosting a super-spreader NBA All-Star Weekend. What does this mean for the rest of the country? The Hookup: Deal on a 5TB portable external HD for 'archive files'. Be sure to listen, rate, review and share on Apple Podcasts, Google Podcasts & Spotify. Engage on all social platforms @snobOScast. Leave comments and suggestions. Web: snobOScast.com | Email: snobOScast@gmail.com | Support the Patreon: https://www.patreon.com/snobOScast | Follow Nica Montford @TechSavvyDiva | Follow Terrance Gaines @BrothaTech

bp-Affairs
More than 150PB of storage at 1.5TB/s for the Fugaku supercomputer

bp-Affairs

Play Episode Listen Later Nov 28, 2020 2:43


More than 150PB of storage at 1.5TB/s for the Fugaku supercomputer - by bp-Affairs

Bellingham Podcast
Ep. 171 "BellingHOME your Data" #tech

Bellingham Podcast

Play Episode Listen Later Nov 16, 2020 39:07


For the week of November 15, 2020
Housekeeping
* Chris' interview with the Whatcom Dads Podcast (https://overcast.fm/+lZQ1OUAdk)
Bringing your data home - BellingHOME your data
* Drives driving you mad?
* Data bartering; and the price our data is actually worth
* NAS vs DAS
* Speed and spec
  * Even at $1,000 of real money on drives and a NAS, things are only as fast as your internal/external connection and the drives themselves (see the rough transfer-time sketch after these notes)
  * How much data do you have, and how long IT WILL take to get it on, indexed, and secured on a NAS
  * This geek took the better part of 2 weeks to migrate just short of 5TB of data
* RAID and what that means for a NON geek
* Brands
  * Synology
  * QNAP
  * Drobo
  * WD MyCloud
* Beyond storage
  * VPN / Email / Contacts / Calendar
  * Video/Photo localized hosting
  * Plex
  * Website host
  * Dropbox replacement
  * Google Docs replacement
Bham Bingo
Slight edit - while we were recording, WA Gov. Inslee announced sweeping changes to our state to help reduce rising COVID cases. So as things are changing, we are skipping a recommendation of a local place for food this episode... https://www.seattletimes.com/seattle-news/health/gov-inslee-orders-sweeping-restrictions-on-indoor-gatherings-restaurants-bars-gyms-as-covid-19-cases-surge-in-washington-state/
Quality Assurance
* Chris - After seeing the new shiny iPhone products released this fall, Minimalism documentary director and YouTube vlogger Matt D'Avella is switching to a flip phone - https://www.youtube.com/watch?v=4nX5jwa2QTs
* AJ - TrueSpies Podcast; https://spyscape.com/podcast/
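As a rough, back-of-the-envelope companion to that migration note, here is a minimal sketch of why moving roughly 5TB onto a NAS can take so long; the link speeds and the 70% efficiency factor are illustrative assumptions, not figures from the episode.

```python
def transfer_hours(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Estimate hours to move data_tb terabytes over a link_gbps network link."""
    data_bits = data_tb * 8 * 1e12              # terabytes -> bits (decimal units)
    usable_bps = link_gbps * 1e9 * efficiency   # sustained throughput after protocol/disk overhead
    return data_bits / usable_bps / 3600

# Rough numbers for a ~5TB migration over common home links
for label, gbps in [("Gigabit Ethernet", 1.0), ("2.5GbE", 2.5), ("USB 3.0 (~5 Gbps)", 5.0)]:
    print(f"{label}: ~{transfer_hours(5.0, gbps):.0f} hours for 5TB")
```

Even the best case above is several hours of sustained copying before any indexing or verification, which is why a real-world migration with checksums and incremental passes can stretch into weeks.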

Sonic Storytellers: Screen Music Business On the Go
Ep 17: How to Organize Computer Files as a Game Composer Across 6 Drives

Sonic Storytellers: Screen Music Business On the Go

Play Episode Listen Later Oct 28, 2020 15:10


Let's explore my 6 computer drives (x2 internal SSD, x4 external HDD) and how I organize them for maximum efficiency as a music composer on a budget PC...all for instantaneous loading times. ⚡ My favorite internal SSD: https://amzn.to/35Bhr1z ⚡ My favorite external HDD, USB 3.0: https://amzn.to/34t5GuL ⚡ My favorite backup HDD with power supply: https://amzn.to/3mlKwF6 *These Amazon Affiliate links support my business at no additional cost to you. Thanks for your support! MY DRIVE BREAKDOWN: Windows: 1TB internal SSD | Samples: 2TB internal SSD | Samples 2: 5TB external HDD | Projects: 2TB external HDD | Media: 2TB external HDD | Backup: 3TB HDD with power supply ✅ Download my free guide 25 Questions Every Composer Asks ($17 value!): https://www.stevenmelin.com/25questions
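As a tiny illustration of the kind of role-based layout described above, here is a hypothetical sketch of that drive breakdown expressed as a lookup table a file-management script could use; the drive letters and paths are assumptions for the example, not the host's actual setup.

```python
# Hypothetical drive-role map for a sample-library workflow (Windows-style paths)
DRIVE_MAP = {
    "os":       "C:/",           # 1TB internal SSD - Windows and applications
    "samples":  "D:/Samples",    # 2TB internal SSD - most-used sample libraries
    "samples2": "E:/Samples2",   # 5TB external HDD - overflow libraries
    "projects": "F:/Projects",   # 2TB external HDD - DAW sessions
    "media":    "G:/Media",      # 2TB external HDD - video, stems, deliverables
    "backup":   "H:/Backup",     # 3TB powered HDD - copies of everything above
}

def resolve(role: str) -> str:
    """Return the root path for a given storage role."""
    return DRIVE_MAP[role]

print(resolve("projects"))  # -> F:/Projects
```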

CBS 시사자키 정관용입니다
[20/06/15 Full Episode] Democratic Party of Korea lawmaker Kim Kyung-hyup: "Urging an end-of-war declaration" / [Head-to-Head Debate] The standing committee controversy and the leaflet dispute / "The truth behind the nationwide personal data leak"

CBS 시사자키 정관용입니다

Play Episode Listen Later Jun 15, 2020 72:50


Part 1 (6:25-6:55 PM) ◎ Sisajaki HOT 7: "National Assembly plenary session elects only six standing committees, including the Legislation and Judiciary Committee" - News1 reporter Kim Yoon-kyung, reporter Kim Bo-hyeop, CBS announcer Seo Yeon-mi ◎ On-site interview: "Resolution urging an end-of-war declaration on the Korean Peninsula, 173 lawmakers join" - Kim Kyung-hyup, Democratic Party of Korea lawmaker ◎ News Cider: "Samsung's big picture? Yang Chang-soo as chair of the prosecution's investigation review committee" Part 2 (7:05-7:50 PM) ◎ Head-to-head debate: "The standing committee controversy and the leaflet dispute" - former lawmaker Lee Jae-oh, attorney Park Ji-hoon ◎ High Flight: "Japan's Gunkanjima (Battleship Island) completely distorts the history of forced labor..." - reporter Kim Min-ha ◎ Issue interview: "The truth behind the leak of 1.5TB of financial and personal data covering virtually the entire population" - Professor Kim Seung-joo, Korea University Graduate School of Information Security

이진우의 손에 잡히는 경제
6/16 (Tue) "1.5TB of card data leaked - is my card safe?", "Why the people who appear on your resident registration certificate matter", "KOSPI plunges 4%"

이진우의 손에 잡히는 경제

Play Episode Listen Later Jun 15, 2020 22:34


- KOSPI plunges 4% despite 1 trillion won from "Donghak ant" retail investors - Government pushes to impose capital gains tax on stocks - Moon administration's 21st real estate package expected to focus on blocking "gap investment" - "Catch the last train": frenzy over "lotto" apartment subscriptions ahead of the promised additional regulations - reporter Ko Ran (Join:D) / "Why the people who appear on your resident registration certificate matter" / "1.5TB of card data leaked - is my card safe?" - Professor Kim Seung-joo (Korea University Graduate School of Information Security)

Okrągły podkastół
#134 - Over 2,500 podcasts in Poland

Okrągły podkastół

Play Episode Listen Later Apr 15, 2020 28:58


Podcast page and comments: https://www.facebook.com/groups/podkastingwpolsce/permalink/731380170731491/ Notes: #134 - We now have over 2,500 podcasts in Poland. 2517 - 2421 = 96 new podcasts in the directory over the last week. Cover art is visible on the podkasty.info home page; quite a few from Radio Katowice. How long does it take to produce a podcast episode? https://www.facebook.com/groups/podkastingwpolsce/permalink/730551547481020/ Whooshkaa is dead, long live Anchor: https://anchor.fm/features Chomikuj is doing a cleanup. Amazon S3 - a correction - you pay for the gigabytes used and for transfer, not for a 5TB minimum; this is hard to see in the plan comparison. The Podcast Card. --- Send in a voice message: https://anchor.fm/podkasting/message

JanniTech
The Mac Pro (Mac Pro vs. Tesla Cybertruck)

JanniTech

Play Episode Listen Later Dec 20, 2019 19:09


Today we talk about the new Mac Pro, which costs as much as a Cybertruck but packs 28 cores and 1.5TB of DDR4 RAM. We also talk about the AirPods Pro and the iPhone 11 Pro.

A Messenger from Wednesday
ボイスアヤノ.メ vol.33 [A Messenger from Wednesday] (2019/12/18)

A Messenger from Wednesday

Play Episode Listen Later Dec 18, 2019 42:16


ボイスアヤノ.メ vol.33 A Messenger from Wednesday [TODAY'S] History repeats itself. [This Week on AYANO.ME] The Mac Pro goes on sale! Thinking about how to put 1.5TB of memory to good use https://ayano.me/archives/10207 From QUESTION: Which is more just, telling a cruel truth or telling a kind lie? https://ayano.me/archives/10231 The new National Stadium is finished, but what happens with the roof in the end? What about the athletics track? The air conditioning? https://ayano.me/archives/10238 presented by AYANO.ME ---

Unsupervised Learning
Unsupervised Learning: No. 187

Unsupervised Learning

Play Episode Listen Later Jul 22, 2019 35:09


Lots of people in the security community went silly over the FaceApp application last week, basically saying that you shouldn't be using the application because they'll steal your face and then be able to impersonate you. Oh, and then it turned out to be a Russian company who put out the application, and that made it 100x worse. The problem here is the lack of Threat Model Thinking. When it comes to election security, propaganda discussions, etc., I am quite concerned about Putin's willingness and ability to harm our country's cohesion through memes and social media. But that does not extend to some random company stealing faces. Why? Because before you can get legitimately concerned about something, you have to be able to describe a threat scenario in which that thing becomes dangerous. As I talked about in this piece, pictures of your face are not the same as your face when it comes to biometric authentication. There's a reason companies need a specific device, combined with their custom algorithm, in order to enroll you in a facial identification system. They scan you in a very specific way and then store your data (which is just a representation, not your actual face) in a very specific way. Then they need to use that same exact system to scan you again, so they can compare the two representations to each other. That isn't happening with random apps that have pictures of you. And even if that were the case, they could just get your face off your social media, where those same people who are worried are more than happy to take selfies, use their pictures as profile pictures, and make sure as many people see them as possible. There are actual negative things that can be done with images (like making Deepfakes of you), and that will get easier over time, but the defense for that is to have zero pictures of you…anywhere. And once again you have to ask who would be doing that to you, and why. Bottom line: authentication systems take special effort to try to ensure that the input given is the same as the enrollment item (e.g., face, fingerprint, etc.), so it will not be easy any time soon to go from a random picture to something that can fool a face scanner or fingerprint reader at the airport. People reading this probably already know this, but spread the word: threat modeling is one of our best tools for removing emotion from risk management. A contractor named SyTech that does work with the Russian FSB has been breached, resulting in the release of 7.5TB of data on the FSB's various projects. This is obviously embarrassing for SyTech and the FSB, but the leaked projects focused on de-anonymization, spying on Russian businesses, and the project to break Russia away from the Internet, which are all known and expected efforts. So there don't seem to be any big reveals as a result of the leak. Someone discovered that a bunch of browser extensions were reading things they shouldn't be, and sending them out to places they shouldn't be. This is not surprising to me. Chrome extensions are like Android apps, which should tell you all you need to know about installing random ones that seem interesting. My policy on browser extensions is extremely strict for this reason. People need to understand how insane the entire idea of the modern web is. We're visiting URLs that are executing code on our machines. And not just code from that website, but code from thousands of other websites in an average browsing session. It's a garbage fire.
And the only defense really is to question how much you trust your browser, your operating system, and the original site you're visiting. But even then you're still exposing yourself to significant and continuously-evolving risk when you run around clicking things online. And the worst possible thing you can do in this situation is install more functionality, which gives more parties, more access, to that giant stack of assumptions you're making just by using a web browser. The best possible stance is
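To make the enrollment-versus-photo distinction above concrete, here is a minimal, hypothetical sketch of how a biometric verifier compares a stored enrollment template with a fresh capture; the embedding callable, the threshold, and the function names are illustrative assumptions, not any vendor's actual pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face templates (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def enroll(controlled_capture: np.ndarray, embed) -> np.ndarray:
    # Enrollment happens on a specific sensor with a specific algorithm;
    # only a numeric representation is stored, never the face itself.
    return embed(controlled_capture)

def verify(template: np.ndarray, live_capture: np.ndarray, embed, threshold: float = 0.8) -> bool:
    # Verification must run the *same* capture-and-embed pipeline on a live
    # sample; an arbitrary photo scraped from an app generally fails the
    # sensor, liveness, and preprocessing assumptions baked into that pipeline.
    return cosine_similarity(template, embed(live_capture)) >= threshold
```

The only point of the sketch is that the stored template and the comparison logic are tied to a controlled capture pipeline, which is why a random selfie is a weak substitute for the enrolled biometric.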

Production Expert Podcast
373 - Is The New Mac Pro That Expensive? We Discuss - Production Expert Podcast

Production Expert Podcast

Play Episode Listen Later Jun 11, 2019 48:36


In the Production Expert podcast Mike, James and Alan talk about whether the new Mac Pro 7,1, with 8 PCIe card slots, up to 28 cores, and 1.5TB of RAM, is too expensive, and take a detailed look at what will be inside, the components Apple has selected, the comparisons people are making with other computers, and whether they would buy one or not. They also share their finds of the week.

Circle Bros.
Circle Bros. Podcast June 9 2019

Circle Bros.

Play Episode Listen Later Jun 10, 2019 65:09


Show Notes. Movie News: Warner Brothers is rebooting the DCEU into two separate universes. Post from WeGotThisCovered.com: https://wegotthiscovered.com/movies/wb-rebooting-entire-dceu/ TV News: The DC Universe streaming service is being re-evaluated by AT&T and WarnerMedia: https://www.newsarama.com/45474-dc-universe-streaming-service-being-reevaluated-by-at-t-warner-media-report.html WWDC: Apple unveiled iOS 13, which features the highly anticipated "Dark Mode" for iPhone users. Apple also unveiled a new operating system for iPads called "iPadOS". This bridges the gap between iOS and macOS; iPad users will be able to use a mouse with an iPad. App developers can finally design applications for iOS, iPadOS, and macOS at the same time with the new Xcode update. Apple unveiled a new Mac Pro that features up to 28 cores, up to 1.5TB of RAM, and multiple GPUs. Movie News: We discuss reviews for Dark Phoenix and talk about how much we thought the movie was going to suck. Gaming News: THQ Nordic announced SpongeBob SquarePants Battle for Bikini Bottom: Rehydrated, a complete remake of the original game with new levels that were cut from the original. Cyberpunk 2077 gets an April 16, 2020 release date; Keanu Reeves is also in the game. Final note: E3 is this week!!!! Prepare yourself.... for disappointment? --- Thank you for supporting the Circle Brothers! Please like, share and subscribe on Apple Podcasts, Spotify, YouTube and SoundCloud. Apple Podcast: https://podcasts.apple.com/us/podcast/circle-bros/id1466171025 Spotify: https://open.spotify.com/show/2taoamBoAT2Mb8idGyEPDw?si=jw3LiZI5RTO_Ww1-o9ib4A YouTube: https://www.youtube.com/channel/UCVP0jvRAXADb9mL2Z2K80Pg Music provided by Primitive SoundLab. "Primitive SoundLab is a dope producer.......Period." https://instagram.com/theprimitivesoundlab?igshid=1v3q28bimj7sm https://m.facebook.com/ThePrimitiveSoundLab/

极客公园:科技 互联网 奇酷探秘
[Morning Report] WWDC recap: Apple's fifth major OS is born; the most accessible KAWS yet arrives and sells out in three seconds

极客公园:科技 互联网 奇酷探秘

Play Episode Listen Later Jun 4, 2019 4:46


In the early hours of this morning, the annual WWDC 2019 keynote took place. Apple's four existing operating systems, macOS, iOS, tvOS and watchOS, all received updates. On top of that, Apple gave the iPad its own operating system, and Apple's fifth major OS was born. It frees the iPad from the constraints of iOS and makes it far more independent. Judging by the on-stage demos, an iPad running iPadOS is especially strong at text and image handling; iPadOS supports on-screen multitasking, making split-screen work with text possible. Apple also announced a new Mac Pro, configurable with up to a 28-core Intel Xeon CPU and 1.5TB of system memory, so many 3D modeling applications will be able to run on it. Billed as the most powerful Mac Pro ever, it starts at $5,999. Apple also introduced the new Apple Pro Display XDR, a 32-inch monitor with 6K resolution starting at $4,999. Both go on sale this fall. Beyond its integration with Alipay's Zhima Credit platform, the car-leasing service Tangeche has added another layer of assurance: Dasouche recently signed a strategic cooperation agreement with the Zhejiang branch of Bank of China, under which the bank will provide comprehensive credit to Dasouche's financing-lease brand Tangeche and build a risk-control system for auto financing leases. Users can check their credit online and receive an approval decision within 60 seconds, while the bank's intelligent risk-decision engine enables automated, end-to-end risk monitoring. Uniqlo's KAWS collaboration sold out three seconds after going on sale in Tmall's 618 event yesterday, and it drove a 37-fold jump in searches for the Uniqlo brand on Tmall. The "KAWS:SUMMER" series comprises 12 adult T-shirts, 6 children's items and 4 canvas bags, and it will be KAWS's final collaboration with Uniqlo. KAWS is an American street artist whose best-known works include "The Long Way Home"; in 2006 he founded the streetwear brand Original Fake. Responding to earlier reports, Qutoutiao said that editor-in-chief Xiao Houjun left the company last week for personal reasons, thanked him for his contributions during his tenure, said related adjustments had been made in advance so his departure has no impact on existing business, and noted that a new editor-in-chief has already been chosen. The Aladdin mini-program analytics platform released its May 2019 WeChat mini-program report: for the first time, lifestyle-service mini-programs made the rankings at a higher rate than games, becoming the second-largest category on the list. Newly listed mini-programs are also more evenly distributed, concentrated in 11 areas including games, video, tools and lifestyle services, with the gaps between categories narrowing; mini-games now span richer sub-genres, covering racing, competitive, strategy/management and action-adventure titles. CooTek published its Q1 2019 results yesterday: net revenue of $40.04 million, up 83% year over year, and 169 million global daily active users, up 40%. On the business side, CooTek made big moves in both its content and advertising ecosystems, launching its own ad platform and integrating its content businesses to create synergies, driving user growth and global expansion. LG officially opened pre-orders in South Korea yesterday for the world's first 8K OLED TV. The 88Z9 uses an 88-inch OLED panel with 8K ultra-high-definition resolution, four times the pixel count of 4K UHD. Since 8K content is still scarce and will not be plentiful in the short term, LG fitted it with the second-generation Alpha 9 8K processor, which uses deep-learning algorithms to upscale images to a higher resolution. All that capability comes at a staggering price: the 88Z9 8K OLED TV retails in South Korea for 50 million won, roughly 292,000 yuan.

Production Expert Podcast
372 - Is The New Mac Pro All We Asked For Except For The Price? - Production Expert Podcast

Production Expert Podcast

Play Episode Listen Later Jun 4, 2019 46:43


In the Production Expert podcast Mike, Dan and Eli talk about the new modular "cheese-grater" Mark 2 Mac Pro, with 8 PCIe card slots, up to 28 cores, and 1.5TB of RAM, and share their first reactions, as well as covering the new Pro Display and macOS 10.15 Catalina. In contrast with the new Mac Pro, they discuss Cakewalk, a free DAW that already has ARA2 support, and share their favourite free plug-ins and software. They also share their finds of the week.

Píldoras para ZOHO CRM
133# ZOHO WorkDrive, the new tool for managing files in your company

Píldoras para ZOHO CRM

Play Episode Listen Later Jan 28, 2019 10:20


In this episode I give a quick introduction to the ZOHO WorkDrive tool; for once, the name says quite a lot about its main purpose :-) The reason for covering it is to answer a question from a student of the academy who asked me about a storage problem in ZOHO CRM. ZOHO WorkDrive as a complement to ZOHO CRM: This episode came about thanks to a student's question, and you may see yourself reflected in his problem. He uses ZOHO CRM, and the ability to attach documents to each record (leads, contacts, etc.) is very useful to him, but the files he attaches are quite heavy. He has used up the storage that ZOHO CRM offers for free, and after doing the math, buying extra storage in ZOHO CRM would cost him a lot. He asked me whether he could store this information in other services without losing the convenience ZOHO CRM gives him of associating files with each record. The answer is that although there are some services, for example it is possible to link ZOHO CRM with Google Drive, they still have the problem of consuming space in ZOHO CRM, because what they do is access that service (Google Drive) and copy the file into ZOHO CRM, so that did not work for him. In fact, until ZOHO WorkDrive appeared I did not know of any service that solved this problem, but that has changed: thanks to its latest update, ZOHO WorkDrive now connects with ZOHO CRM and can associate files with ZOHO CRM records without consuming space in ZOHO CRM. On top of that, ZOHO WorkDrive offers 5TB in its Business edition, which is frankly a lot of space. Want to know more about ZOHO WorkDrive? Then you know what to do: listen to this episode, and if you have any questions, just write a comment. :-)

The CultCast
CultCast #360 - Our More in the Making hardware reactions!

The CultCast

Play Episode Listen Later Nov 2, 2018 68:57


This week: Apple's More in the Making hardware keynote was an action-packed ride stuffed with one hardware update after another! We'll tell you what we like (and what we don't) about Apple's newest product updates. Plus: Apple quietly offers a big screw you to 2018 MacBook Pro owners... prepare for a rant. This episode supported by: Easily create a beautiful website all by yourself, at Squarespace.com/cultcast. Use offer code CultCast at checkout to get 10% off your first purchase of a website or domain. Whether it's planning a date night or scheduling a business trip, Fin's army of virtual assistants can do the tasks you don't have time to do. Try it for free at fin.com/cultcast. CultCloth will keep your iPhone X, Apple Watch, Mac and iPad sparkling clean, and for a limited time use code CULTCAST at checkout to score a free CleanCloth with any order at CultCloth.co. Thanks to Kevin McLeod for the music you hear on today's episode. On the show this week: @erfon / @lkahney / @lewiswallace. Everything you need to know about Apple's Q4 2018 earnings call https://www.cultofmac.com/587580/everything-you-need-to-know-about-apples-q4-2018-earnings-call/ USB-C for iPad Pro: Everything you need to know https://www.cultofmac.com/586972/ipad-pro-usb-c/ All the ways the 2018 iPad Pro blows away its predecessors https://www.cultofmac.com/586822/2018-ipad-pro-comparison-2017/ iPad Pro: Super sexy new design. Already sold out for 1-2 weeks! Thinner, and even smaller. The 12.9" is now the size of a sheet of paper. No headphone jack... Big performance increases, big price increases. Start price is now about 20% higher: 11-inch models start at $799 (Wi-Fi) and $949 (Wi-Fi + Cellular), while 12.9-inch models start at $999. iPad Pro comes with Apple's next-generation Neural Engine for advanced machine learning. This is the first gadget besides iPhone that will include Face ID. Powered by the new A12X Bionic chip: A12X Bionic has eight cores (four performance cores and four efficiency cores) and per Apple provides up to 35 percent faster single-core performance, plus a seven-core GPU to deliver up to twice the graphics performance. They say it's now as powerful as an Xbox One S console. A USB-C connector is replacing the Lightning connector and allows the iPad to charge your iPhone. What can connect to the iPad? USB hubs? Hard drives? Printers? Even Apple said they weren't sure when The Verge asked them. Connect to external displays up to 5K... but why? Still no file system. No mouse support. Apple is trying to position this as a laptop, but it's not a laptop. Apple Pencil 2 gets magnetic connection and charging, better iOS integration, and a 30% higher price tag (now $130). BTW: the previous 10.5" iPad is still available, and still for the same 64GB base price of $649; 256 and 512 models are also still available. MacBook Air - a confusing product: newest-gen Amber Lake 8th-gen dual-core CPU, complete overhaul, Retina display, new 3rd-gen butterfly keyboard, Touch ID, larger Force Touch trackpad. Made from 100% garbage. Also gets a 20% increase in price for the base model with 128GB, now $1,200. The previous model is still available for the same $1,000 price tag. A maxed-out model with 16GB of RAM and 1.5TB is $2,600. The MacBook is now more expensive than the Air, starting at $1,299, and it has a slower processor, a smaller screen, and fewer upgrade options. The MBA is now just $100 less than the 13" MacBook Pro, which has a faster processor and Intel Iris graphics.
Mac Mini: Full overhaul. Space Gray finish. HDMI 2.0, two USB 3 ports, four Thunderbolt 3 ports, a headphone jack, and, for an extra $100, a 10-gigabit Ethernet port. With the Thunderbolt 3 ports. A new cooling system doubles the airflow, allowing the machine to run at a maximum sustained power that is 70 percent higher than before. According to Geekbench 4 scores, the base 4-core CPU has close to the same performance as the base 2017 5K iMac. The $799 base model comes with 128GB of SSD, a quad-core 8th-gen Coffee Lake i3 CPU with no Turbo Boost or Hyper-Threading, and 8GB of RAM. The maxed-out model with 64GB of RAM, 2TB SSD, and a 6-core i7 CPU is a mere $4,200. But the huge omission here is the Intel UHD Graphics 630; there is no option to upgrade. MacBook Pro: New MacBook Pros with updated graphics are coming next month https://www.cultofmac.com/586666/new-macbook-pro-graphics/ Apple has screwed us, their loyal pro users. Just about 3 months ago, on July 12, 2018, Apple introduced refreshed MacBook Pro models aimed at professional users, with the long-awaited 6-core Coffee Lake processors, up to 32GB of RAM, and slightly better GPUs. Apple positioned these as video editing dream machines. It had been years since a compelling MBP update, so people were excited and scooped them up when they went on sale. But the graphics cards on these new MBPs were only minor spec bumps: the 560X was just a spec-bumped 560, which was itself only a spec-bumped 460. Fast forward to this week, and Apple has announced they will be offering a massive upgrade option to the Radeon Pro Vega GPUs, the same ones offered in the iMac Pro. These new graphics options offer up to 60 percent faster graphics performance in video editing, 3D design, and rendering workloads... you know, the exact things people are BUYING THIS FCKING MBP FOR. Clearly, Apple knew they were going to offer this upgrade, but they didn't tell us. Why? Because they knew many of us might wait to buy, and they wanted our money. 5 things we didn't get at Apple's 'More in the Making' event https://www.cultofmac.com/586726/apple-october-press-event-airpower-ipad-mini-airpods/

PetaPixel Photography Podcast
Ep. 125: Is It Time to Abandon Nikon? - and more

PetaPixel Photography Podcast

Play Episode Listen Later Nov 18, 2016 24:56


Episode 125 of the PetaPixel Photography Podcast. Download MP3 - Subscribe via iTunes, Google Play or RSS!

Featured: Chris Knight, portrait, fashion and advertising photographer

In This Episode
If you subscribe to the PetaPixel Photography Podcast in iTunes, please take a moment to rate and review us and help us move up in the rankings so others interested in photography may find us.

Sponsor: MeFOTO. Save 15% off the MeFOTO product of your choice at MeFOTO.com with the code PetaPixel.

Portrait, fashion and advertising photographer Chris Knight opens the show. Thanks Chris!
Listener Jay in Maryland wants to know if Nikon is a sinking ship he should get off of now, or hold tight. Insight into Nikon's numbers, their place in this industry and talk of the future. (#)
Leica pleases dentists and collectors with a red version of its 50mm f/2, and of course it's extremely pricey. (#)
Seagate announces a durable, portable 5TB hard drive to store your data on the go. What's your data backup plan? You do have a plan, right? (#)
DJI announces two impressive new drones amid GoPro's recall of the Karma. (#)
Outtakes

Connect With Us
Thank you for listening to the PetaPixel Photography Podcast! Connect with me, Sharky James, on Twitter, Instagram and Facebook (all @LensShark) as we build this community.
We’d love to answer your question on the show. Leave us an audio question through our voicemail widget, comment below or via social media. But audio questions are awesome!
You can also cut a show opener for us to play on the show! As an example: “Hi, this is Matt Smith with Double Heart Photography in Chicago, Illinois, and you’re listening to the PetaPixel Photography Podcast with Sharky James!”

Digital Coffee
Twitter hates ideas and BLU is spying on you

Digital Coffee

Play Episode Listen Later Nov 16, 2016 48:02


Today's Episode
Twitter hates ideas, and this is a problem for me. I think the best part of Twitter was that it allowed people to express their ideas. It didn't matter if you were right, left, center, or didn't care. The problem is that Twitter wants groupthink. It thinks that one political ideology is right. I think that businesses that embrace free speech in their policies do thrive. However, private businesses do not have to follow a free speech model. Twitter needs to follow its own policy and only ban those that do terrible things. Disagreeing is not one of them.

Tech News Decoded:
Twitter is adding QR codes because reasons
Android TV finally gets a Twitter Live TV app
Twitter is banning alt-right people and it’s stupid
Microsoft is all in with the Linux Foundation
Google Earth VR looks cool
There’s a $5 device that can crack your password
WhatsApp is getting video calling
BLU phones are spying on you for China
Snapchat is going public… soon
BroadSoft’s Team-One is a Slack competitor
Kaspersky is making a program to leave social media
Google Docs, Sheets, and Spreadsheets let users make their own templates
FCC cancels any of their new policies
Google Translate is getting smarter
Seagate has a 5TB portable hard drive
OnePlus says goodbye to the 3 and hello to the 3T
My first impressions of the Pixel phone

Apps/Programs To Try:
Otto Radio
A.I. Experiments
Drop

Tweetable Quotes:
Twitter is throwing ideas out there and seeing which ones stick…
Twitter, you need to stop banning people because they don't agree with you
The Pixel phone is great so far.

Credit:
Outro music by http://www.bensound.com/royalty-free-music/track/sci-

Support:
Like these podcasts? Support me on Podbean and Patreon! See acast.com/privacy for privacy and opt-out information.

Infinitum
Ogi, još te volimo!

Infinitum

Play Episode Listen Later Sep 8, 2016 101:00


Follow-Up

On the floppy drive in the Mac: only the 400K and 800K drives had variable speed, the 1.44MB one did not / ep.XX
On cards in iCloud: Miki's Erste Bank Mastercard does work after all; pay attention when entering the card's expiration date, the format is mm/yyyy, not mm/yy
Luka Đokić sent in a couple of notes, first about Backblaze backup:
Backblaze now works in Serbia: "Just wanted to share the great news that Backblaze (I assume as of recently) can be used in Serbia. It now allows account creation, a 15-day trial period and access to the billing page. We are not on the list of countries, but Bosnia and Croatia are. I tried it myself, and it is happily backing up 1.5TB of my data with no problems!"
And then about the changing artwork in Overcast: "I recently noticed that some podcasts with chapter support somehow use different artwork per chapter. In the attached screenshots I took one of the ATP episodes as an example, which has three different artworks in its first three chapters. I haven't tried other players, but I assume there should be some trick (other than via chapters) to get Montiljo's works showing for you too. I know this isn't much help, but I hope it's a step closer to the goal!"

Security

"You guys just go ahead and build it, and we'll keep it safe," says the FBI to Apple. Ah, ha…
Government Hackers Caught Using Unprecedented iPhone Spy Tool
Snowden tweeted that this is no coincidence, that the NSA often leaves servers with holes open in case it needs them, and that sooner or later this comes back to bite them
Microsoft accidentally released the Golden Key for Windows Secure Boot into the wild
Ivan Krstić's presentation from Black Hat has been published, both the video and the slides. There are even a few, rather positive, comments.
The PEGASUS hole in iOS and OS X, which Apple has patched, was also in the news recently, but their approach to these problems is still not ideal. A couple of blog posts about it:
PEGASUS iOS Kernel Vulnerability Explained, part one and part two
If this topic interests you, Stefan Esser is worth following

News

After 100 years of just… um… waiting - Google updated Docs to support iPad multitasking
An Exclusive Look at How AI and Machine Learning Work at Apple – Steven Levy on Backchannel

Topic: Apple Event, Sep 2016

Expectations, in summary
Ming Chi Kuo, in summary
Live stream
iOS 10 and watchOS 3 (probably tvOS 10 as well) come out on September 13. macOS Sierra on September 20.
New products:
WATCH Series 1 (the old Watch with the new S2 chip)
WATCH Series 2
Ceramic replaces Edition as the most expensive material, but far below 10k
iPhone 7
iPhone 7 Plus
Amazing camera
4 cores, performance is off the charts
Five colors, Jet Black as the primary marketing model
No audio port; audio is now Lightning or wireless
To make up for it, there are stereo speakers
AirPods - new wireless earbuds with the W1 chip

Rumors from the crystal ball
10.5-inch iPad Pro coming in 2017, OLED upgrade in 2018

Acknowledgements
Recorded 07.09.2016.
Intro music by Vladimir Tošić, his old site is here.
Logo by Aleksandra Ilić.
Episode artwork: Sleepwalker (2010) by Saša Montiljo, his corner on DeviantArt.

Stone & Nutz
#091: Jared Fogle Has Over 5 Terabytes of Child Porn?! | Psychedelic Ghosts | Our Universe of Wonder

Stone & Nutz

Play Episode Listen Later Nov 12, 2015 33:50


It is getting way too much. Jared Fogle is back in the news. It is rather silly listening to how he had more child porn than Tommy Stone has regular porn! Porn is fun, but when it comes to children, you are an asshole. Take those 5TBs of dirty kiddie porn and burn them in hell, with Jared himself. Before we get too pissed off, let's calm down and have a talk about the good ol' psychedelics and how they relate to our brains. Is seeing a ghost a psychedelic experience? Are ghosts even real? The Universe is wonderful!

BSD Now
66: Conference Connoisseur

BSD Now

Play Episode Listen Later Dec 3, 2014 82:32


This week on the show, we'll be talking with Paul Schenkeveld, chairman of the EuroBSDCon foundation. He tells us about his experiences running BSD conferences and how regular users can get involved too. We've also got answers to all your emails and the latest news, coming up on BSD Now - the place to B.. SD. This episode was brought to you by

Headlines

More BSD presentation videos (https://www.meetbsd.com/)
The MeetBSD video uploading spree continues with a few more talks; maybe this'll be the last batch.
Corey Vixie, Web Apps in Embedded BSD (https://www.youtube.com/watch?v=Pbks12Mqpp8)
Allan Jude, UCL config (https://www.youtube.com/watch?v=TjP86iWsEzQ)
Kip Macy, iflib (https://www.youtube.com/watch?v=P4FRPKj7F80)
While we're on the topic of conferences, AsiaBSDCon's CFP was extended (https://twitter.com/asiabsdcon/status/538352055245492226) by one week.
This year's ruBSD (https://events.yandex.ru/events/yagosti/rubsd14/) will be on December 13th in Moscow.
Also, the BSDCan call for papers (http://lists.bsdcan.org/pipermail/bsdcan-announce/2014-December/000135.html) is out, and the event will be in June next year.
Lastly, according to Rick Miller, "A potential vBSDcon 2015 event is being explored though a decision has yet to be made."
***

BSD-powered digital library in Africa (http://peercorpsglobal.org/nzegas-digital-library-becomes-a-reality/)
You probably haven't heard much about Nzega, Tanzania, a corner of East Africa without much internet access.
With physical schoolbooks being a rarity there, a few companies helped out to bring some BSD-powered reading material to a local school.
They now have a pair of FreeNAS Minis at the center of their local network, with over 80,000 books and accompanying video content stored on them (~5TB of data currently).
The school's workstations also got wiped and reloaded with FreeBSD, and everyone there seems to really enjoy using it.
***

pfSense 2.2 status update (https://blog.pfsense.org/?p=1486)
With lots of people asking when the 2.2 release will be done, some pfSense developers decided to provide a status update.
2.2 will have a lot of changes: being based on FreeBSD 10.1, Unbound instead of BIND, updating PHP to something recent, including the new(ish) IPSEC stack updates, etc.
All these things have taken more time than previously expected.
The post also has some interesting graphs showing the ratio of opened and closed bugs for the upcoming release.
***

Recommended hardware threads (https://www.reddit.com/r/BSD/comments/2n8wrg/bsd_on_mini_itx/)
A few threads caught our attention this week, all about hardware recommendations for BSD setups.
In the first one, the OP asks about mini-ITX hardware to run a FreeBSD server and NAS. Everyone gave some good recommendations for low-power, Atom-based systems.
The second thread (https://www.marc.info/?t=141694918800006&r=1&w=2) started off asking about which CPU architecture is best for PF on an OpenBSD router, but ended up being another hardware thread.
For a router, the ALIX, APU and Soekris boards still seem to be the most popular choices, with the third (https://www.reddit.com/r/homelab/comments/24m6tj/) and fourth (https://www.reddit.com/r/PFSENSE/comments/2nblgp/) threads confirming this.
If you're thinking about building your first BSD box - server, router, NAS, whatever - these might be some good links to read.
***

Interview - Paul Schenkeveld - freebsd@psconsult.nl (mailto:freebsd@psconsult.nl)
Running a BSD conference

News Roundup

From Linux to FreeBSD - for reals (https://www.reddit.com/r/freebsd/comments/2nqa60/)
Another Linux user is ready to switch to BSD, and takes to Reddit for some community encouragement (seems to be a common thing now).
After being a Linux guy for 20(!) years, he's ready to switch his systems over, and is looking for some helpful guides to transition.
In the comments, a lot of new switchers offer some advice and reading material.
If any of the listeners have some things that were helpful along your switching journey, maybe send 'em this guy's way.
***

Running FreeBSD as a Xen Dom0 (http://wiki.xenproject.org/wiki/FreeBSD_Dom0)
Continuing progress has been made to allow FreeBSD to be a host for the Xen hypervisor.
This wiki article explains how to run the Xen branch of FreeBSD and host virtual machines on it.
Xen on FreeBSD currently supports PV guests (modified kernels) and HVM (unmodified kernels, using hardware virtualization features).
The wiki provides instructions for running Debian (PV) and FreeBSD (HVM), and discusses the features that are not finished yet.
***

HardenedBSD updates and changes (http://hardenedbsd.org/article/shawn-webb/2014-11-18/aout-and-null-mapping-support-removal)
a.out is the old executable format for Unix. The name stands for assembler output, and was coined by Ken Thompson as the fixed name for the output of his PDP-7 assembler in 1968.
FreeBSD, on which HardenedBSD is based, switched away from a.out in version 3.0.
A restriction against NULL mapping was introduced in FreeBSD 7 (https://www.freebsd.org/security/advisories/FreeBSD-EN-09:05.null.asc) and enabled by default in FreeBSD 8.
However, for reasons of compatibility, it could be switched off, allowing buggy applications to continue to run, at the risk of allowing a kernel bug to be exploited.
HardenedBSD has removed the sysctl, making it impossible to run in 'insecure mode'.
Package building update: more consistent repo, no more i386 packages (http://hardenedbsd.org/article/shawn-webb/2014-11-30/package-building-infrastructure-maintenance)
***

Feedback/Questions
Boris writes in (http://slexy.org/view/s2kVPKICqj)
Alex writes in (http://slexy.org/view/s21Fic4dZC) (edit: adding "tinker panic 0" to the ntp.conf will disable the sanity check; see the sketch after these show notes)
Chris writes in (http://slexy.org/view/s2zk1Tvfe9)
Robert writes in (http://slexy.org/view/s22alvJ4mu)
Jake writes in (http://slexy.org/view/s203YMc2zL)
***

Mailing List Gold
Real world authpf use (https://www.marc.info/?t=141711266800001&r=1&w=2)
The (https://svnweb.freebsd.org/ports/head/UPDATING?r1=373564&r2=373563&pathrev=373564) great (https://lists.freebsd.org/pipermail/freebsd-ports/2014-November/096788.html) perl (https://lists.freebsd.org/pipermail/freebsd-ports/2014-November/096799.html) event (https://lists.freebsd.org/pipermail/freebsd-perl/2014-November/010146.html) of (https://lists.freebsd.org/pipermail/freebsd-perl/2014-November/010149.html) 2014 (https://lists.freebsd.org/pipermail/freebsd-perl/2014-November/010167.html)
***
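Since a couple of the items above boil down to one-line configuration changes, here is a minimal sketch of both, assuming a stock FreeBSD box (the pool hostnames are just placeholder examples, and the sysctl name is the one stock FreeBSD uses, not anything HardenedBSD ships):

  # /etc/ntp.conf (fragment) - the fix from the feedback section:
  # "tinker panic 0" disables ntpd's sanity check, so it will still step the
  # clock even when the offset is huge (for example, after a dead CMOS battery)
  tinker panic 0
  server 0.freebsd.pool.ntp.org iburst
  server 1.freebsd.pool.ntp.org iburst

  # The NULL-mapping knob that HardenedBSD removed; on stock FreeBSD you can
  # still inspect it (0 = mappings at address 0 are refused, the default):
  sysctl security.bsd.map_at_zero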

RunAs Radio
Building New SQL Server Hardware with Brent Ozar

RunAs Radio

Play Episode Listen Later Oct 1, 2014 35:11


Richard chats with Brent Ozar about the amazing new hardware coming out for SQL Server. Yeah, it's time to geek out on hardware. Brent discusses some of the amazing small form factor machines coming from Dell and Hewlett-Packard, including the Dell PowerEdge 13G R730xd. What's new there? 24 RAM slots permit up to 1.5TB of RAM! And that's not all: Brent talks about the power of having three-stage storage - room for three PCIe-based SSDs, 18 1.8" SATA SSD slots and 8 3.5" hard drive bays. Not only is that a lot of storage, it also provides the flexibility to let SQL Server structure your data into ultra-fast storage, super-fast storage and plain old fast storage. While the cloud is only offering scale-out solutions, the latest hardware shows that you can still scale up!

FrequencyCast UK Tech Radio Show
FrequencyCast UK Show 100: Centenary Show, Samsung Media Drive, Dartford Crossing and Minty Biscuits

FrequencyCast UK Tech Radio Show

Play Episode Listen Later Jun 1, 2014 30:03


One hundred FrequencyCast shows.... time to look back at what the world was like when we started. Pete and Kelly look back at eight years of tech and play a few fun extracts. We also have an interview with Samsung about their new Wireless 1.5TB media drive, and we look at changes to the Dartford Crossing. In the news, the latest on the eBay data breach, new TV channels, and minty chocolatey biscuits. Links and transcripts at https://www.frequencycast.co.uk/cast100.html

DEKOMPRESOR /TECHNOLOGIA
BIG DATA - Tsunami danych

DEKOMPRESOR /TECHNOLOGIA

Play Episode Listen Later Mar 25, 2013 84:02


Over the next five years, the amount of data stored on the Internet will grow by 800%. It is also estimated that roughly 5TB of data, held on the Internet and in various IT systems, exists for every inhabitant of the Earth. Is the Internet facing a veritable data tsunami that will flood everything and everyone? And are there ways to cope with it? That's the topic of this episode.

Zprávy Živě
Zprávy Živě v MP3

Zprávy Živě

Play Episode Listen Later Dec 7, 2012


Microsoft has improved its So.cl network, the film about Jobs is nearing completion, Nokia has sold its Finnish campus, Google has announced Zeitgeist, WCIT is taking place in Dubai, and WD is planning power-efficient 5TB drives.

T&FのITニッチとーく
ITニッチとーく第4回

T&FのITニッチとーく

Play Episode Listen Later Oct 16, 2008


IT Niche Talk, episode 4: a novel idea that solves the kids-and-cellphones problem and even raises academic performance, an introduction to iAntiVirus, a free antivirus program for the Mac, a service that consolidates loyalty points using FeliCa, the arrival of 1.5TB hard disks and a recommendation to build your own external hard drive, the official name of Windows 7, the appearance of software that cracks wireless LAN WEP keys, and other topics, along with talk that may or may not relate to them.

TechByter Worldwide (formerly Technology Corner) with Bill Blinn
TechByter Worldwide 2008.07.13: Miscellaneous Musing of a Misdirected Mind and Nerdly News

TechByter Worldwide (formerly Technology Corner) with Bill Blinn

Play Episode Listen Later Jul 12, 2008 19:46


Covering the waterfront: From open source software to Model T automobiles on the freeway to lions in my house. In Nerdly News, Linux on Wall Street and 1.5TB on your desktop.