Podcasts about tianqi

  • 26 PODCASTS
  • 45 EPISODES
  • 33m AVG DURATION
  • ? INFREQUENT EPISODES
  • Jan 24, 2025 LATEST

POPULARITY (chart: 2017–2024)


Best podcasts about tianqi

Latest podcast episodes about tianqi

Business News - WA
At Close of Business Podcast January 24 2025

Business News - WA

Play Episode Listen Later Jan 24, 2025 9:28


Justin Fris and Sam Jones discuss efforts to revamp and grow the UWA Waterpolo Club. Plus: Serious misconduct alleged at PGA; IGO and Tianqi down tools at troubled Kwinana plant; and Albanese's $626 million tradie bonus.

The Thesis Review
[48] Tianqi Chen - Scalable and Intelligent Learning Systems

The Thesis Review

Play Episode Listen Later Oct 28, 2024 46:29


Tianqi Chen is an Assistant Professor in the Machine Learning Department and Computer Science Department at Carnegie Mellon University and the Chief Technologist of OctoML. His research focuses on the intersection of machine learning and systems. Tianqi's PhD thesis is titled "Scalable and Intelligent Learning Systems," which he completed in 2019 at the University of Washington. We discuss his influential work on machine learning systems, starting with the development of XGBoost, an optimized distributed gradient boosting library that has had an enormous impact in the field. We also cover his contributions to deep learning frameworks like MXNet and machine learning compilation with TVM, and connect these to modern generative AI. - Episode notes: www.wellecks.com/thesisreview/episode48.html - Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter - Follow Tianqi Chen on Twitter (@tqchenml) - Support The Thesis Review at www.patreon.com/thesisreview or www.buymeacoffee.com/thesisreview

Chronique des Matières Premières
Rock-bottom lithium prices are leading to postponed investments

Chronique des Matières Premières

Play Episode Listen Later Sep 5, 2024 1:47


Lithium is dragging down mining companies' profits. Because of excess supply, several Chinese miners posted losses in the first half of 2024 and announced they are rethinking their investment strategy. With prices having fallen to their lowest level in three years, the profits some had still managed to book last year proved out of reach in 2024.

First-half results published in recent days by two Chinese miners confirm the crisis gripping the industry. Ganfeng, the world's fifth-largest producer by value, announced a loss of $107 million. Tianqi, another Chinese group, recorded its first half-year loss since 2020 – $734 million – according to Bloomberg. "There are no exceptions," sums up Michel Jebrak, a mineral resources specialist and co-author of the book Objectif lithium (Éditions MultiMondes). "All the big lithium producers have lost 50% of their value." That includes Albemarle, the American number one, which has already been drastically cutting its costs for several months to try to offset the fall in prices.

Expansion plans called into question
Unsurprisingly, Ganfeng is now also rethinking its investments, in particular those in projects that do not generate "significant" returns in the short term. The future has never been so uncertain for these young companies that dreamed of becoming lithium "kingpins" in the coming years. "The fall in their market values makes them more affordable for the majors," explains Michel Jebrak, particularly those that have exited coal and will have to position themselves in the metals of the future, which are essential to the electric transition. "So we may well see a period of negotiation with a view to buyouts or mergers between companies," the expert adds.

Prospects hinging on the evolution of the car fleet
Only China's economic recovery and the electrification of the European and American car fleets will allow miners to climb back up the slope. Yet the rise of electric vehicles itself depends on government policies, and is therefore inherently hard to anticipate. In one of their latest notes, UBS analysts predict the market will remain in surplus until 2027, and cut their outlook for lithium demand by 2030 by 10%, judging that the announced postponement of several projects is not yet enough to offset weak demand.

Also read: Exploiter le lithium, une fausse bonne idée?

Marcus Today Market Updates
Pre-Market Report – Thursday 5th September: US markets steady | SPI up 2

Marcus Today Market Updates

Play Episode Listen Later Sep 4, 2024 10:40


US equities closed mixed following a choppy trading session as Treasury yields slumped on jobs data and Nvidia extended its two-day sell-off to 11%. The Dow edged higher, up 38 points (+0.09%). Up 236 points at best. Down 96 points at worst. S&P 500 and NASDAQ closed lower, down 0.16% and 0.30% respectively, dragged down by losses in Energy and Tech stocks. Russell 2000 fell 0.16% and the VIX rose 2.90%. The Philadelphia SE Semiconductor Index rebounded from its biggest one-day drop since COVID-19, gaining 0.25%. The US JOLTS report showed job openings fell to a 3.5-year low in July, signalling an easing labour market, which may strengthen the case for a bigger rate cut by the Fed later this month. Economic growth slowed across the country while prices rose "modestly" and employers grew more reluctant to hire and choosier about who they selected for job openings, the Federal Reserve reported in the Beige Book. ASX to inch higher. SPI Futures up 2 points (+0.03%).

COMMODITIES
Chile's top court rejects Tianqi appeal to halt SQM-Codelco deal, newspaper says.
Gold rebounds from lows after weak US job openings data.
Copper steadies after hitting three-week low on global growth worries.
Crude futures settle down by more than $1/bbl on demand fears.
OPEC+ reportedly discussing delaying planned oil output hike in October, according to sources.
Citi says average price of oil could drop to $60 per barrel in 2025 due to weak demand and increased supply.

Why not sign up for a free trial? Get access to expert market insights and manage your investments with confidence. Ready to invest in yourself? Join the Marcus Today community.

Marcus Today Market Updates
Pre-Market Report – Monday 5th August – US Markets Fall Hard – SPI Down 115 – Risks Grow – NFP Stokes Recession Fears

Marcus Today Market Updates

Play Episode Listen Later Aug 4, 2024 13:38


The sell-off on Wall Street continued for a second consecutive session overnight. Weak jobs data fuelled worries that the Fed's decision to hold rates at two-decade highs is risking a hard landing/recession. Market fears triggered a surge in volatility, with the VIX jumping to 23.39 (+25.8%), its highest since March 2023. The S&P 500 recorded its worst reaction to jobs data in almost two years, falling 1.84%. The NASDAQ plummeted 2.43%, confirming it is in correction territory. The Dow traded lower all session, losing 611 points (-1.51%). Down 989 points at worst. The Russell 2000 slumped 3.52%, falling to a three-week low in its largest two-day fall since June 2022. Non-farm payrolls rose by 114k in July, significantly below expectations of 175k, marking one of the weakest prints since the pandemic. The US unemployment rate unexpectedly rose for a fourth month to 4.3%, a near three-year high. Treasury yields took a nosedive overnight after the jobs numbers showed the US created fewer jobs than expected, supporting bets for more aggressive rate cuts by the Fed this year. The 10Y yield dropped 18.8bps and the 2Y yield fell 28.5bps. The USD Index fell to a four-month low, down 1.14%. Bets for a 50bps cut by the Fed at the September meeting have jumped to 69%, according to CME FedWatch. ASX set to tumble another 115 points (-1.46%) according to SPI futures.

COMMODITIES
Exxon delivers $9.2bn Q2 profit, raises output target.
Chevron earnings slide, Hess merger arbitration drags on.
Chilean court rejects Tianqi's request to pause SQM-Codelco deal.
Copper firms on potential US rate cuts after jobs data.
Oil settles at 8-month low after disappointing US job numbers.
OPEC oil output rises in July on Saudi rebound, survey finds.
Gold retreats on profit-taking and logs weekly gain.

Why not sign up for a free trial? Get access to expert market insights and manage your investments with confidence. Ready to invest in yourself? Join the Marcus Today community.

Caixin Global Podcasts
Caixin Biz Roundup: PwC China in Crisis

Caixin Global Podcasts

Play Episode Listen Later Jun 7, 2024 12:32


The EU's proposed carbon footprint methodology has Chinese EV-battery makers on edge; Tianqi considers action to protect its stake in a Chilean lithium mining firm. Subscribe to a bundle deal now to unlock all coverage by Caixin Global and The Wall Street Journal for only $200 a year. That's a 66% discount. Group access and applicable discounts are available. Contact us for a customized plan.

Daybreak en Español
Sheinbaum's landslide victory; interview with Andrés Pardo of XP

Daybreak en Español

Play Episode Listen Later Jun 3, 2024 6:49


Claudia Sheinbaum wins by a 30-point margin, and the Mexican peso weakens; GameStop shares rise ahead of the open; Tianqi presses on in its fight with SQM and Codelco; and Andrés Pardo, strategist for the region at XP Investimentos, shares his view on Fed rates and Colombian assets.
To subscribe to the Cinco Cosas newsletter: https://www.bloomberg.com/account/newsletters/five-things-spanish?sref=IHf7eRWL
More from Bloomberg en Español:
YouTube: https://www.youtube.com/BloombergEspanol
WhatsApp: https://whatsapp.com/channel/0029VaFVFoWKAwEg9Fdhml1l
TikTok: https://vm.tiktok.com/ZGeuw69Ao/
X: https://twitter.com/BBGenEspanol
Production: Eduardo Thomson (@ethomson1) and Malu Poveda (@PovedaMalu)
See omnystudio.com/listener for privacy information.

El Diario de Cooperativa AM
Diego Ibáñez denounced a "new right-wing ideology: anti-Gabrielism"

El Diario de Cooperativa AM

Play Episode Listen Later May 28, 2024 32:54


Engineer Gustavo Lagos spoke with El Diario de Cooperativa about the positive outlook for copper prices. The mining expert also addressed the Codelco-SQM lithium alliance, noting that "I believe the deal will go ahead, regardless of Tianqi and Ponce Lerou." We also spoke with deputy Diego Ibáñez (CS), who criticized that Congress "has been left paralyzed" over the pension reform because, in his view, the right has "inaugurated a new kind of ideology, which is anti-Gabrielism," since "everything the President proposes, they vote against, regardless of whether it benefits the public." Hosted by Verónica Franco and Rafael Pardo.

Mesa Central - RatPack
The 40-plus days of stoppage at Puerto Coronel and the change of chairman at Tianqi Lithium

Mesa Central - RatPack

Play Episode Listen Later May 7, 2024 22:53


In a new edition of Mesa Central's Rat Pack, Iván Valenzuela spoke with editors Paula Comandari and Marily Lüders about the stoppage at the Port of Coronel and the change at the top of Chinese company Tianqi Lithium.

Mesa Central - Columnistas
Mujica and Pérez on Boric's mysterious phrase, the calls to exert "social pressure," and the controversies in the lithium world

Mesa Central - Columnistas

Play Episode Listen Later Mar 31, 2024 38:52


In a new edition of Mesa Central Domingo, Iván Valenzuela spoke with panelists Mónica Pérez and Kike Mujica about the controversy over the phrase "más Narbona, menos Craig" ("more Narbona, less Craig"), deputy Núñez's calls, and the differences between SQM and Tianqi.

Radio Duna | Información Privilegiada
SQM vs. Tianqi, the economic agenda, and the level of the dollar

Radio Duna | Información Privilegiada

Play Episode Listen Later Mar 25, 2024


In the PM edition, we speak with Arturo Curtze, senior analyst at Alfredo Cruz y Cía, and with Marco Correa, chief economist at BICE Inversiones.

Business News - WA
At Close Of Business Podcast March 15

Business News - WA

Play Episode Listen Later Mar 15, 2024 10:56


Jack McGinn and Tom Zaunmayr discuss the politicisation of the cashless card welfare initiative. Plus: Rail delays continue; Tianqi's lithium calls; and the Collie battery start.

X22 Report
[DS] War Games A Simulation Of A 2024 Coup After The Election, The Final Act – Ep. 3273

X22 Report

Play Episode Listen Later Feb 1, 2024 96:58


The people of Europe are not giving up; they are now banding together and the EU is in trouble, they cannot stop what is happening. Layoffs are accelerating. The [CB] will push their [CBDC], but it will push people into alternative currencies. The [DS] is now projecting what they are going to do for the 2024 election. They used covid to cheat in the election and this time around they know they are going to lose. They have been war gaming a simulation of a coup after the 2024 election. This will be their final treasonous act. They are now creating the idea that patriots will revolt after the election when Trump loses. This is the opposite, they will not accept the results and they will use their people to push the event. Playbook known.

Economy
https://twitter.com/disclosetv/status/1753022609418010712
https://twitter.com/qtimekennedy/status/1753082407576555694

China's Lithium Producers See Profits Collapse as EV Demand Craters
Chinese lithium giants Ganfeng and Tianqi warned investors that their profits are plunging by up to 80 percent as the demand for electric vehicle (EV) batteries weakened, producing a glut of lithium that drove prices down. Ganfeng posted a note to the Hong Kong stock market that said net profits declined by up to 80 percent in 2023 because the "cyclical" nature of the lithium industry, and the "growth rate of demand slowing down," caused a "significant decrease in the price of lithium-salt products." EV sales slowed around the world in 2023, a trend slowed only by dramatic price cuts from expensive market-leading brands like Tesla and a push to adopt Tesla's charging port design as the industry standard, a move that would theoretically reduce consumer anxiety about being able to find compatible charging ports on long drives.
Source: breitbart.com

Initial & Continuing Jobless Claims Surge As Layoffs Accelerate
The number of Americans filing for jobless claims for the first time in the last week was 224k (12k above expectations and 9k above the prior week). That is the highest since November... This is the biggest two-week jump in initial claims since the start of 2022 (and this is not seasonal, as this data is already seasonally adjusted)... Source: Bloomberg. California, New York, and Oregon saw the biggest increases in initial claims while Illinois and Missouri saw the biggest declines...
https://twitter.com/zerohedge/status/1752502655270666324
Source: zerohedge.com

https://twitter.com/WallStreetSilv/status/1753046638631788975
... stagflationary recession was ending and inflation contained. The specter of the elections will overhang any decision the Fed makes. That is why number 2, the stagflationary solution, seems to be the most palatable to the elites, because government welfare programs can be used to offset the inability to afford basic necessities, and that always plays well to the masses in an election year.

The Globalists Want CBDCs In 2024... What Really Comes Next Will Surprise Them
The good news is that CBDCs are destined to fail. It's important to remember the wise words of Ron Paul: "What none of them (politicians) will admit is that the market is more powerful than the c...

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
LLMs Everywhere: Running 70B models in browsers and iPhones using MLC — with Tianqi Chen of CMU / OctoML

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Aug 10, 2023 52:10


We have just announced our first set of speakers at AI Engineer Summit! Sign up for the livestream or email sponsors@ai.engineer if you'd like to support.

We are facing a massive GPU crunch. As both startups and VCs hoard Nvidia GPUs like countries count nuclear stockpiles, tweets about GPU shortages have become increasingly common. But what if we could run LLMs with AMD cards, or without a GPU at all? There's just one weird trick: compilation. And there's one person uniquely qualified to do it. We had the pleasure to sit down with Tianqi Chen, who's an Assistant Professor at CMU, where he both teaches the MLC course and runs the MLC group. You might also know him as the creator of XGBoost, Apache TVM, and MXNet, as well as the co-founder of OctoML. The MLC (short for Machine Learning Compilation) group has released a lot of interesting projects:

* MLC Chat: an iPhone app that lets you run models like RedPajama-3B and Vicuna-7B on-device. It gets up to 30 tok/s!
* Web LLM: Run models like LLaMA-70B in your browser (!!) to offer local inference in your product.
* MLC LLM: a framework that allows any language model to be deployed natively on different hardware and software stacks.

The MLC group has just announced new support for AMD cards; we previously talked about the shortcomings of ROCm, but using MLC you can get performance very close to the NVIDIA counterparts. This is great news for founders and builders, as AMD cards are more readily available. Here are their latest results on AMD's 7900s vs some of the top NVIDIA consumer cards. If you just can't get a GPU at all, MLC LLM also supports ARM and x86 CPU architectures as targets by leveraging LLVM. While speed performance isn't comparable, it allows for non-time-sensitive inference to be run on commodity hardware. We also enjoyed getting a peek into TQ's process, which involves a lot of sketching. With all the other work going on in this space with projects like ggml and Ollama, we're excited to see GPUs becoming less and less of an issue to get models in the hands of more people, and innovative software solutions to hardware problems!

Show Notes

* TQ's Projects: XGBoost, Apache TVM, MXNet, MLC, OctoML, CMU Catalyst
* ONNX
* GGML
* Mojo
* WebLLM
* RWKV
* HiPPO
* Tri Dao's Episode
* George Hotz Episode

People:

* Carlos Guestrin
* Albert Gu

Timestamps

* [00:00:00] Intros
* [00:03:41] The creation of XGBoost and its surprising popularity
* [00:06:01] Comparing tree-based models vs deep learning
* [00:10:33] Overview of TVM and how it works with ONNX
* [00:17:18] MLC deep dive
* [00:28:10] Using int4 quantization for inference of language models
* [00:30:32] Comparison of MLC to other model optimization projects
* [00:35:02] Running large language models in the browser with WebLLM
* [00:37:47] Integrating browser models into applications
* [00:41:15] OctoAI and self-optimizing compute
* [00:45:45] Lightning Round

Transcript

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, writer and editor of Latent Space. [00:00:20]

Swyx: Okay, and we are here with Tianqi Chen, or TQ as people call him, who is assistant professor in ML computer science at CMU, Carnegie Mellon University, also helping to run Catalyst Group, also chief technologist of OctoML. You wear many hats. Are those, you know, your primary identities these days? Of course, of course. [00:00:42]

Tianqi: I'm also, you know, very enthusiastic about open source.
So I'm also a VP and PMC member of the Apache TVM project and so on. But yeah, these are the things I've been up to so far. [00:00:53]

Swyx: Yeah. So you did Apache TVM, XGBoost, and MXNet, and we can cover any of those in any amount of detail. But maybe what's one thing about you that people might not learn from your official bio or LinkedIn, you know, on the personal side? [00:01:08]

Tianqi: Let me say, yeah, so normally when I do, I really love coding, even though like I'm trying to run all those things. So one thing that I keep a habit on is I try to do sketchbooks. I have a book, like real sketchbooks, to draw down the design diagrams, and I keep sketching over the years, and now I have like three or four of them. And it's usually a fun experience of thinking the design through and also seeing how an open source project evolves, and also looking back at the sketches that we had in the past to say, you know, all these ideas really turned into code nowadays. [00:01:43]

Alessio: How many sketchbooks did you get through to build all this stuff? I mean, if one person alone built one of those projects, he'd be a very accomplished engineer. Like you built like three of these. What's that process like for you? Like it's the sketchbook, like the start, and then you think about the code or like. [00:01:59]

Swyx: Yeah. [00:02:00]

Tianqi: So, so usually I start sketching on high level architectures, and also in a project that works for over years, we also start to think about, you know, new directions, like of course generative AI language model comes in, how it's going to evolve. So normally I would say it takes like one book a year, roughly at that rate. It's usually fun to, I find it's much easier to sketch things out, and then it gives a more like a high level architectural guide for some of the future items. Yeah. [00:02:28]

Swyx: Have you ever published these sketchbooks? Cause I think people would be very interested on, at least on a historical basis. Like this is the time where XGBoost was born, you know? Yeah, not really. [00:02:37]

Tianqi: I started sketching like after XGBoost. So that's a kind of missing piece, but a lot of design details in TVM are actually part of the books that I try to keep a record of. [00:02:48]

Swyx: Yeah, we'll try to publish them and publish something in the journals. Maybe you can grab a little snapshot for visual aid. Sounds good. [00:02:57]

Alessio: Yeah. And yeah, talking about XGBoost, so a lot of people in the audience might know it's a gradient boosting library, probably the most popular out there. And it became super popular because many people started using them in like machine learning competitions. And I think there's like a whole Wikipedia page of like all state-of-the-art models. They use XGBoost and like, it's a really long list. When you were working on it, so we just had Tri Dao, who's the creator of FlashAttention, on the podcast. And I asked him this question, it's like, when you were building FlashAttention, did you know that like almost any transformer-based model will use it? And so I asked the same question to you when you were coming up with XGBoost, like, could you predict it would be so popular or like, what was the creation process? And when you published it, what did you expect? We have no idea. [00:03:41]

Tianqi: Like, actually, the original reason that we built that library is that at that time, deep learning just came out. Like that was the time where AlexNet just came out.
And one of the ambitious missions that myself and my advisor, Carlos Guestrin, had then is we want to think about, you know, trying to test the hypothesis: can we find alternatives to deep learning models? Because then, you know, there are other alternatives like, you know, support vector machines, linear models, and of course, tree-based models. And our question was, if you build those models and feed them with big enough data, because usually like one of the key characteristics of deep learning is that it's taking a lot [00:04:22]

Swyx: of data, right? [00:04:23]

Tianqi: So we will be able to get the same amount of performance. That's a hypothesis we're setting out to test. Of course, if you look at now, right, that's a wrong hypothesis, but as a byproduct, what we found out is that, you know, most of the gradient boosting libraries out there were not efficient enough for us to test that hypothesis. So I happen to have quite a bit of experience in the past of building gradient boosting trees and their variants. So XGBoost was kind of like a byproduct of that hypothesis testing. At that time, I'm also competing a bit in data science challenges, like I worked on KDDCup and then Kaggle kind of become bigger, right? So I kind of think maybe it's becoming useful to others. One of my friends convinced me to try to do a Python binding of it. That turned out to be like a very good decision, right, to be effective. Usually when I build it, we feel like maybe a command line interface is okay. And now we have a Python binding, we have R bindings. And then, you know, it started getting interesting. People started contributing different perspectives, like visualization and so on. So we started to push a bit more on to building distributed support to make sure it works on any platform and so on. And even at that time point, when I talked to Carlos, my advisor, later, he said he never anticipated that we'll get to that level of success. And actually, why I pushed for gradient boosting trees, interestingly, at that time, he also disagreed. He thinks that maybe we should go for kernel machines then. And it turns out, you know, actually, we are both wrong in some sense, and Deep Neural Network was the king of the hill. But at least the gradient boosting direction got into something fruitful. [00:06:01]

Swyx: Interesting. [00:06:02]

Alessio: I'm always curious when it comes to these improvements, like, what's the design process in terms of like coming up with it? And how much of it is a collaborative with like other people that you're working with versus like trying to be, you know, obviously, in academia, it's like very paper-driven kind of research driven. [00:06:19]

Tianqi: I would say the XGBoost improvement at that time point was more on like, you know, I'm trying to figure out, right. But it's combining lessons. Before that, I did work on some of the other libraries on matrix factorization. That was like my first open source experience. Nobody knew about it, because you'll find, likely, if you go and try to search for the package SVDFeature, you'll find some SVN repo somewhere. But it's actually being used for some of the recommender system packages. So I'm trying to apply some of the previous lessons there and trying to combine them. The later projects like MXNet and then TVM are much, much more collaborative in a sense that... But, of course, XGBoost has become bigger, right? So when we started that project myself, and then we have, it's really amazing to see people come in.
Michael, who was a lawyer and now works in the AI space as well, contributing visualizations. Now we have people from our community contributing different things. So XGBoost even today, right, it's a community of committers driving the project. So it's definitely something collaborative and moving forward on getting some of the things continuously improved for our community. [00:07:37]

Alessio: Let's talk a bit about TVM too, because we got a lot of things to run through in this episode. [00:07:42]

Swyx: I would say that at some point, I'd love to talk about this comparison between XGBoost, or tree-based type AI or machine learning, compared to deep learning, because I think there is a lot of interest around, I guess, merging the two disciplines, right? And we can talk more about that. I don't know where to insert that, by the way, so we can come back to it later. Yeah. [00:08:04]

Tianqi: Actually, what I said, when we test the hypothesis, the hypothesis is kind of, I would say it's partially wrong, because the hypothesis we want to test now is, can you run tree-based models on image classification tasks, where deep learning is certainly a no-brainer right [00:08:17]

Swyx: now today, right? [00:08:18]

Tianqi: But if you try to run it on tabular data, still, you'll find that most people opt for tree-based models. And there's a reason for that, in the sense that when you are looking at tree-based models, the decision boundaries are naturally rules that you're looking at, right? And they also have nice properties, like being able to be agnostic to scale of input and be able to automatically compose features together. And I know there are attempts on building neural network models that work for tabular data, and I also sometimes follow them. I do feel like it's good to have a bit of diversity in the modeling space. Actually, when we're building TVM, we build cost models for the programs, and actually we are using XGBoost for that as well. I still think tree-based models are going to be quite relevant, because first of all, it's really easy to get it to work out of the box. And also, you will be able to get a bit of interpretability and control monotonicity [00:09:18]

Swyx: and so on. [00:09:19]

Tianqi: So yes, it's still going to be relevant. I also sometimes keep coming back to think about, are there possible improvements that we can build on top of these models? And definitely, I feel like it's a space that can have some potential in the future. [00:09:34]

Swyx: Are there any current projects that you would call out as promising in terms of merging the two directions? [00:09:41]

Tianqi: I think there are projects that try to bring a transformer-type model for tabular data. I don't remember specifics of them, but I think even nowadays, if you look at what people are using, tree-based models are still one of their toolkits. So I think maybe eventually it's not even a replacement, it will be just an ensemble of models that you can call. Perfect. [00:10:07]

Alessio: Next up, about three years after XGBoost, you built this thing called TVM, which is now a very popular compiler framework for models. Let's talk about, so this came out at about the same time as ONNX. So I think it would be great if you could maybe give a little bit of an overview of how the two things work together. Because it's kind of like the model goes to ONNX, then goes to TVM. But I think a lot of people don't understand the nuances. Can you give a bit of a backstory on that?
[00:10:33]

Tianqi: So actually, that's kind of an ancient history. Before XGBoost, I worked on deep learning for two years or three years. I got a master's before I started my PhD. And during my master's, my thesis focused on applying convolutional restricted Boltzmann machines for ImageNet classification. That is the thing I'm working on. And that was before the AlexNet moment. So effectively, I had to handcraft NVIDIA CUDA kernels on, I think, a GTX 2070 card. It took me about six months to get one model working. And eventually, that model is not so good, and we should have picked a better model. But that was like an ancient history that really got me into this deep learning field. And of course, eventually, we found it didn't work out. So in my master's, I ended up working on recommender systems, which got me a paper, and I applied and got a PhD. But I always want to come back to work on the deep learning field. So after XGBoost, I think I started to work with some folks on this particular MXNet. At that time, frameworks like Caffe, Theano, PyTorch hadn't yet come out. And we're really working hard to optimize for performance on GPUs. At that time, I found it's really hard, even for NVIDIA GPU. It took me six months. And then it's amazing to see on different hardwares how hard it is to go and optimize code for the platforms that are interesting. So that gets me thinking, can we build something more generic and automatic? So that I don't need an entire team of so many people to go and build those frameworks. So that's the motivation of starting working on TVM. There is really too much machine learning engineering needed to support deep learning models on the platforms that we're interested in. I think it started a bit earlier than ONNX, but once it got announced, I think it's in a similar time period at that time. So overall, how it works is that TVM, you will be able to take a subset of machine learning programs that are represented in what we call a computational graph. Nowadays, we can also ingest a loop-level program from your machine learning models. Usually, you have model formats like ONNX, or in PyTorch, they have FX Tracer that allows you to trace the FX graph. And then it goes through TVM. We also realized that, well, yes, it needs to be more customizable, so it will be able to perform some of the compilation optimizations like fusing operators together, doing smart memory planning, and more importantly, generate low-level code. So that works for NVIDIA and also is portable to other GPU backends, even non-GPU backends [00:13:36]

Swyx: out there. [00:13:37]

Tianqi: So that's a project that actually has been my primary focus over the past few years. And it's great to see how it started from where I think we are the very early initiator of machine learning compilation. I remember there was a visit one day, one of the students asked me, are you still working on deep learning frameworks? I tell them that I'm working on ML compilation. And they said, okay, compilation, that sounds very ancient. It sounds like a very old field. And why are you working on this? And now it's starting to get more traction, like if you say Torch Compile and other things. I'm really glad to see this field starting to pick up. And also we have to continue innovating here. [00:14:17]
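
To make the ONNX-to-TVM flow concrete, here is a minimal sketch using TVM's classic Relay Python API. Treat it as illustrative rather than the exact MLC pipeline: the file name, input name, and shape below are placeholders, and TVM's module layout has shifted across releases.

```python
# Hedged sketch: compile an ONNX model with TVM's Relay API and run it.
# "model.onnx" and the input name/shape are placeholders for your model.
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}  # map graph input name -> shape

# Ingest the ONNX graph into Relay (TVM's computational-graph IR).
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile: operator fusion, memory planning, and low-level code generation
# happen inside relay.build. Swap "llvm" for "cuda", "metal", "vulkan", ...
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run the compiled module.
dev = tvm.device(target, 0)
runtime = graph_executor.GraphModule(lib["default"](dev))
runtime.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
runtime.run()
print(runtime.get_output(0).numpy().shape)
```
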
Alessio: I think the other thing that I noticed is, it's kind of like a big jump in terms of area of focus to go from XGBoost to TVM, it's kind of like a different part of the stack. Why did you decide to do that? And I think the other thing about compiling to different GPUs and eventually CPUs too, did you already see some of the strain that models could have just being focused on one runtime, only being on CUDA and that, and how much of that went into it? [00:14:50]

Tianqi: I think it's less about trying to get impact, more about wanting to have fun. I like to hack code, I had great fun hacking CUDA code. Of course, being able to generate CUDA code is cool, right? But now, after being able to generate CUDA code, okay, by the way, you can do it on other platforms, isn't that amazing? So it's more of that attitude to get me started on this. And also, I think when we look at different researchers, myself is more like a problem solver type. So I like to look at a problem and say, okay, what kind of tools we need to solve that problem? So regardless, it could be building better models. For example, while we built XGBoost, we built certain regularizations into it so that it's more robust. It also means building system optimizations, writing low-level code, maybe trying to write assembly and build compilers and so on. So as long as they solve the problem, definitely go and try to do them together. And I also see it's a common trend right now. Like if you want to be able to solve machine learning problems, it's no longer at the algorithm layer, right? You kind of need to solve it from both the algorithm, data, and systems angle. And this entire field of machine learning systems, I think it's kind of emerging. And there's now a conference around it. And it's really good to see a lot more people are starting to look into this. [00:16:10]

Swyx: Yeah. Are you talking about ICML or something else? [00:16:13]

Tianqi: So machine learning and systems, right? So not only machine learning, but machine learning and systems. So there's a conference called MLSys. It's definitely a smaller community than ICML, but I think it's also an emerging and growing community where people are talking about what are the implications of building systems for machine learning, right? And how do you go and optimize things around that and co-design models and systems together? [00:16:37]

Swyx: Yeah. And you were area chair for ICML and NeurIPS as well. So you've just had a lot of conference and community organization experience. Is that also an important part of your work? Well, it's kind of expected for an academic. [00:16:48]

Tianqi: If I hold an academic job, I need to do services for the community. Okay, great. [00:16:53]

Swyx: Your most recent venture in MLSys is going to the phone with MLC LLM. You announced this in April. I have it on my phone. It's great. I'm running Llama 2, Vicuna. I don't know what other models that you offer. But maybe just kind of describe your journey into MLC. And I don't know how this coincides with your work at CMU. Is that some kind of outgrowth? [00:17:18]

Tianqi: I think it's more like a focused effort that we want in the area of machine learning compilation. So it's kind of related to what we built in TVM. So when we built TVM, that was five years ago, right? And a lot of things happened. We built the end-to-end machine learning compiler that works, the first one that works. But then we captured a lot of lessons there. So then we are building a second iteration called TVM Unity. That allows ML engineers to quickly capture the new model and on-demand build optimizations for them. And MLC LLM is kind of like an MLC.
It's more like a vertically driven organization that we go and build tutorials and go and build projects like LLM solutions. So that to really show like, okay, you can take machine learning compilation technology and apply it and bring something fun forward. Yeah. So yes, it runs on phones, which is really cool. But the goal here is not only making it run on phones, right? The goal is making it deploy universally. So we do run on Apple M2 Macs, the 70 billion models. Actually, on a single batch inference, more recently on CUDA, we get, I think, the best performance you can get out there already on the 4-bit inference. Actually, as I alluded earlier before the podcast, we just had a result on AMD. And on a single batch, actually, we can get the latest AMD GPU. This is a consumer card. It can get to about 80% of the 4090, so NVIDIA's best consumer card out there. So it's not yet on par, but thinking about how diversity and what you can enable and the previous things you can get on that card, it's really amazing what you can do with this kind of technology. [00:19:10]

Swyx: So one thing I'm a little bit confused by is that most of these models are in PyTorch, but you're running this inside a TVM. I don't know. Was there any fundamental change that you needed to do, or was this basically the fundamental design of TVM? [00:19:25]

Tianqi: So the idea is that, of course, it comes back to program representation, right? So effectively, TVM has this program representation called TVM script that contains more like computational graph and operational representation. So yes, initially, we do need to take a bit of effort of bringing those models onto the program representation that TVM supports. Usually, there are a mix of ways, depending on the kind of model you're looking at. For example, for vision models and stable diffusion models, usually we can just do tracing that takes a PyTorch model onto TVM. That part is still being robustified so that we can bring more models in. On language model tasks, actually what we do is we directly build some of the model constructors and try to directly map from Hugging Face models. The goal is if you have a Hugging Face configuration, we will be able to bring that in and apply optimization on them. So one fun thing about model compilation is that your optimization doesn't happen only at the source language level, right? For example, if you're writing PyTorch code, you just go and try to use a better fused operator at a source code level. Torch compile might help you do a bit of things in there. In most of the model compilations, it not only happens at the beginning stage, but we also apply generic transformations in between, also through a Python API. So you can tweak some of that. So that part of optimization helps a lot of uplifting in getting both performance and also portability on the environment. And another thing that we do have is what we call universal deployment. So if you get the ML program into this TVM script format, where there are functions that take in tensors and output tensors, we will be able to have a way to compile it. So they will be able to load the function in any of the language runtimes that TVM supports. So if you load it in JavaScript, that's a JavaScript function that you can take in tensors and output tensors. If you're loading Python, of course, and C++ and Java. So the goal there is really bring the ML model to the language that people care about and be able to run it on a platform they like.
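
As an illustration of the "functions that take in tensors and output tensors" idea, here is a small TVMScript (TIR) function compiled and called from Python. This is a hedged sketch: TVMScript syntax varies across TVM releases, and the function here is a toy vector add, not an actual MLC LLM kernel. The same compiled artifact can in principle be exported and loaded from TVM's other language runtimes (C++, Java, JavaScript), which is the universal-deployment story described above.

```python
# Hedged sketch: a tensor-in/tensor-out function in TVMScript (TIR),
# compiled and invoked from Python. Syntax differs between TVM versions.
import numpy as np
import tvm
from tvm.script import tir as T

@tvm.script.ir_module
class Module:
    @T.prim_func
    def vector_add(a: T.handle, b: T.handle, c: T.handle):
        T.func_attr({"global_symbol": "vector_add"})
        A = T.match_buffer(a, (1024,), "float32")
        B = T.match_buffer(b, (1024,), "float32")
        C = T.match_buffer(c, (1024,), "float32")
        for i in T.serial(1024):
            with T.block("C"):
                vi = T.axis.spatial(1024, i)
                C[vi] = A[vi] + B[vi]

lib = tvm.build(Module, target="llvm")  # or "cuda", "metal", "vulkan", ...
dev = tvm.cpu()
a = tvm.nd.array(np.random.rand(1024).astype("float32"), dev)
b = tvm.nd.array(np.random.rand(1024).astype("float32"), dev)
c = tvm.nd.array(np.zeros(1024, dtype="float32"), dev)
lib["vector_add"](a, b, c)
np.testing.assert_allclose(c.numpy(), a.numpy() + b.numpy(), rtol=1e-5)
```
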
[00:21:37]

Swyx: It strikes me that I've talked to a lot of compiler people, but you don't have a traditional compiler background. You're inventing your own discipline called machine learning compilation, or MLC. Do you think that this will be a bigger field going forward? [00:21:52]

Tianqi: First of all, I do work with people working on compilation as well. So we're also taking inspirations from a lot of early innovations in the field. Like for example, TVM initially, we take a lot of inspirations from Halide, which is an image processing compiler. And of course, since then, we have evolved quite a bit to focus on the machine learning related compilations. If you look at some of our conference publications, you'll find that machine learning compilation is already kind of a subfield. So if you look at papers in both machine learning venues, the MLSys conference, of course, and also system venues, every year there will be papers around machine learning compilation. And in the compiler conference called CGO, there's a C4ML workshop that is also kind of trying to focus on this area. So definitely it's already starting to gain traction and becoming a field. I wouldn't claim that I invented this field, but definitely I helped to work with a lot of folks there. And I try to bring a perspective, of course, trying to learn a lot from the compiler optimizations as well as trying to bring in knowledge in machine learning and systems together. [00:23:07]

Alessio: So we had George Hotz on the podcast a few episodes ago, and he had a lot to say about AMD and their software. So when you think about TVM, are you still restricted in a way by the performance of the underlying kernel, so to speak? So if your target is like a CUDA runtime, you still get better performance, no matter like TVM kind of helps you get there, but then that level you don't take care of, right? [00:23:34]

Swyx: There are two parts in here, right? [00:23:35]

Tianqi: So first of all, there is the lower level runtime, like CUDA runtime. And then actually for NVIDIA, a lot of the moat came from their libraries, like CUTLASS, cuDNN, right? Those library optimizations. And also for specialized workloads, actually you can specialize them. Because a lot of cases you'll find that if you go and do benchmarks, it's very interesting. Like two years ago, if you try to benchmark ResNet, for example, usually the NVIDIA library [00:24:04]

Swyx: gives you the best performance. [00:24:06]

Tianqi: It's really hard to beat them. But as soon as you start to change the model to something, maybe a bit of a variation of ResNet, not for the traditional ImageNet detections, but for latent detection and so on, there will be some room for optimization because people sometimes overfit to benchmarks. These are people who go and optimize things, right? So people overfit the benchmarks. So that's the largest barrier, like being able to get the low-level kernel libraries, right? In that sense, the goal of TVM is actually we try to have a generic layer to both, of course, leverage libraries when available, but also be able to automatically generate [00:24:45]

Swyx: libraries when possible. [00:24:46]

Tianqi: So in that sense, we are not restricted by the libraries that they have to offer. That's why we will be able to run Apple M2 or WebGPU where there's no library available because we are kind of like automatically generating libraries. That makes it easier to support less well-supported hardware, right? For example, WebGPU is one example.
From a runtime perspective, AMD, I think before, their Vulkan driver was not very well supported. Recently, they are getting good. But even before that, we'll be able to support AMD through this GPU graphics backend called Vulkan, which is not as performant, but it gives you a decent portability across those [00:25:29]

Swyx: hardware. [00:25:29]

Alessio: And I know we got other MLC stuff to talk about, like WebLLM, but I want to wrap up on the optimization that you're doing. So there's kind of four core things, right? Kernel fusion, which we talked a bit about in the FlashAttention episode and the tinygrad one; memory planning; and loop optimization. I think those are like pretty, you know, self-explanatory. I think the one that people have the most questions, can you quickly explain [00:25:53]

Swyx: those? [00:25:54]

Tianqi: So there are kind of a different things, right? Kernel fusion means that, you know, if you have an operator like convolution, or in the case of a transformer like an MLP, you have other operators that follow that, right? You don't want to launch two GPU kernels. You want to be able to put them together in a smart way, right? And as for memory planning, it's more about, you know, hey, if you run like Python code, every time when you generate a new array, you are effectively allocating a new piece of memory, right? Of course, PyTorch and other frameworks try to optimize for you. So there is a smart memory allocator behind the scene. But actually, in a lot of cases, it's much better to statically allocate and plan everything ahead of time. And that's where like a compiler can come in. We need to, first of all, actually for language models, it's much harder because of dynamic shapes. So you need to be able to do what we call symbolic shape tracing. So we have like a symbolic variable that tells you like the shape of the first tensor is n by 12. And the shape of the third tensor is also n by 12. Or maybe it's n times 2 by 12. Although you don't know what n is, right? But you will be able to know that relation and be able to use that to reason about like fusion and other decisions. So besides this, I think loop transformation is quite important. And it's actually non-traditional. Originally, if you simply write a code and you want to get performance, it's very hard. For example, you know, if you write a matrix multiplier, the simplest thing you can do is the triple loop: for i, j, k: C[i][j] += A[i][k] * B[k][j]. But that code is 100 times slower than the best available code that you can get. So we do a lot of transformation, like being able to take the original code, trying to put things into shared memory, and making use of tensor cores, making use of memory copies, and all this. Actually, all these things, we also realize that, you know, we cannot do all of them. So we also make the ML compilation framework as a Python package, so that people will be able to continuously improve that part of engineering in a more transparent way. So we find that's very useful, actually, for us to be able to get good performance very quickly on some of the new models. Like when Llama 2 came out, we'll be able to go and look at the whole, here's the bottleneck, and we can go and optimize those. [00:28:10]
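
To ground the loop-transformation point, here is a small illustrative comparison in Python/NumPy: the naive triple loop Tianqi spells out, next to a loop-tiled version that works on blocks so operands stay in fast memory. This is a sketch of the idea only; a real compiler like TVM emits the tiled code in a low-level language, where the cache and shared-memory effects actually show up.

```python
import numpy as np

def matmul_naive(A, B):
    """The textbook triple loop: C[i][j] += A[i][k] * B[k][j].
    Compiled literally, it streams through B with poor cache reuse."""
    n, kk = A.shape
    _, m = B.shape
    C = np.zeros((n, m), dtype=A.dtype)
    for i in range(n):
        for j in range(m):
            for k in range(kk):
                C[i, j] += A[i, k] * B[k, j]
    return C

def matmul_tiled(A, B, tile=32):
    """Loop-tiled variant: multiply tile x tile blocks so each block is
    reused while resident in cache (or GPU shared memory). Assumes the
    dimensions divide evenly by `tile`, just to keep the sketch short."""
    n, kk = A.shape
    _, m = B.shape
    C = np.zeros((n, m), dtype=A.dtype)
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, kk, tile):
                # Block-level multiply-accumulate; in a real GPU kernel this
                # is where shared memory and tensor cores come into play.
                C[i0:i0 + tile, j0:j0 + tile] += (
                    A[i0:i0 + tile, k0:k0 + tile] @ B[k0:k0 + tile, j0:j0 + tile]
                )
    return C

A = np.random.rand(128, 128).astype("float32")
B = np.random.rand(128, 128).astype("float32")
assert np.allclose(matmul_naive(A, B), A @ B, atol=1e-3)
assert np.allclose(matmul_tiled(A, B), A @ B, atol=1e-3)
```
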
Alessio: And then the fourth one being weight quantization. So everybody wants to know about that. And just to give people an idea of the memory saving, if you're doing FP32, it's like four bytes per parameter. Int8 is like one byte per parameter. So you can really shrink down the memory footprint. What are some of the trade-offs there? How do you figure out what the right target is? And what are the precision trade-offs, too? [00:28:37]

Tianqi: Right now, a lot of people also mostly use int4 now for language models. So that really shrinks things down a lot. And more recently, actually, we started to think that, at least in MLC, we don't want to have a strong opinion on what kind of quantization we want to bring, because there are so many researchers in the field. So what we can do is we can allow developers to customize the quantization they want, but we still bring the optimal code for them. So we are working on this item called bring your own quantization. In fact, hopefully MLC will be able to support more quantization formats. And definitely, I think there's an open field that's being explored. Can you bring more sparsities? Can you quantize activations as much as possible, and so on? And it's going to be something that's going to be relevant for quite a while. [00:29:27]

Swyx: You mentioned something I wanted to double back on, which is most people use int4 for language models. This is actually not obvious to me. Are you talking about the GGML type people, or even the researchers who are training the models also using int4? [00:29:40]

Tianqi: Sorry, so I'm mainly talking about inference, not training, right? So when you're doing training, of course, int4 is harder, right? Maybe you could do some form of mixed type precision for inference. I think int4 is kind of like, in a lot of cases, you will be able to get away with int4. And actually, that does bring a lot of savings in terms of the memory overhead, and so on. [00:30:09]
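
To put numbers on the int4 point, here is a hedged sketch of symmetric group quantization, the general flavor used for 4-bit LLM inference (the exact schemes in MLC or GGML differ in details like zero points, group sizes, and packing order). Each group of weights shares one fp16 scale and two 4-bit values are packed per byte, which is roughly where the ~8x shrink versus fp32 comes from.

```python
import numpy as np

def quantize_int4_grouped(w: np.ndarray, group_size: int = 128):
    """Symmetric 4-bit quantization with one fp16 scale per group.
    Illustrative only: real schemes add zero points, error analysis, etc."""
    groups = w.reshape(-1, group_size)
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0  # int4 range -8..7
    scale = np.maximum(scale, 1e-8)  # guard against all-zero groups
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    qu = q.astype(np.uint8)  # reinterpret so bit-packing is well defined
    packed = ((qu[:, 0::2] & 0x0F) | ((qu[:, 1::2] & 0x0F) << 4)).astype(np.uint8)
    return packed, scale.astype(np.float16)

w = np.random.randn(4096 * 4096).astype(np.float32)  # one 4096x4096 layer
packed, scales = quantize_int4_grouped(w)
print(f"fp32 weights: {w.nbytes / 2**20:.1f} MiB")   # 64.0 MiB
print(f"int4 packed + scales: {(packed.nbytes + scales.nbytes) / 2**20:.2f} MiB")  # ~8.25 MiB
```
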
Alessio: Yeah, that's great. Let's talk a bit about maybe the GGML, then there's Mojo. How should people think about MLC? How do all these things play together? I think GGML is focused on model level re-implementation and improvements. Mojo is a language, a superset of Python. You're more at the compiler level. Do you all work together? Do people choose between them? [00:30:32]

Tianqi: So I think in this case, I think it's great to say the ecosystem becomes so rich with so many different ways. So in our case, GGML is more like you're implementing something from scratch in C, right? So that gives you the ability to go and customize each of a particular hardware backend. But then you will need to write your own CUDA kernels, and write them optimally for AMD, and so on. So the kind of engineering effort is a bit more broadened in that sense. Mojo, I have not looked at specific details yet. I think it's good to start to say, it's a language, right? I believe there will also be machine learning compilation technologies behind it. So it's good to say, interesting place in there. In the case of MLC, our case is that we do not want to have an opinion on how, where, which language people want to develop, deploy, and so on. And we also realize that actually there are two phases. We want to be able to develop and optimize your model. By optimization, I mean, really bring in the best CUDA kernels and do some of the machine learning engineering in there. And then there's a phase where you want to deploy it as a part of the app. So if you look at the space, you'll find that GGML is more like, I'm going to develop and optimize in the C language, right? And then most of the low-level languages they have. And Mojo is that you want to develop and optimize in Mojo, right? And you deploy in Mojo. In fact, that's the philosophy they want to push for. In the MLC case, we find that actually if you want to develop models, the machine learning community likes Python. Python is a language that you should focus on. So in the case of MLC, we really want to be able to enable, not only be able to just define your model in Python, that's very common, right? But also do ML optimization, like engineering optimization, CUDA kernel optimization, memory planning, all those things in Python that makes it customizable and so on. But when you do deployment, we realize that people want a bit of a universal flavor. If you are a web developer, you want JavaScript, right? If you're maybe an embedded system person, maybe you would prefer C++ or C or Rust. And people sometimes do like Python in a lot of cases. So in the case of MLC, we really want to have this vision of, you optimize, build a generic optimization in Python, then you deploy that universally onto the environments that people like. [00:32:54]

Swyx: That's a great perspective and comparison, I guess. One thing I wanted to make sure that we cover is that I think you are one of these emerging set of academics that also very much focus on your artifacts of delivery. Of course. Something we talked about for three years, that he was very focused on his GitHub. And obviously you treated XGBoost like a product, you know? And then now you're publishing an iPhone app. Okay. Yeah. Yeah. What is your thinking about academics getting involved in shipping products? [00:33:24]

Tianqi: I think there are different ways of making impact, right? Definitely, you know, there are academics that are writing papers and building insights for people so that people can build products on top of them. In my case, I think the particular field I'm working on, machine learning systems, I feel like really we need to be able to get it to the hand of people so that really we see the problem, right? And we show that we can solve a problem. And it's a different way of making impact. And there are academics that are doing similar things. Like, you know, if you look at some of the people from Berkeley, right? A few years, they will come up with big open source projects. Certainly, I think it's just a healthy ecosystem to have different ways of making impacts. And I feel like really be able to do open source and work with open source community is really rewarding because we have a real problem to work on when we build our research. Actually, those research bring together and people will be able to make use of them. And we also start to see interesting research challenges that we wouldn't otherwise see, right, if you're just trying to do a prototype and so on. So I feel like it's something that is one interesting way of making impact, making contributions. [00:34:40]

Swyx: Yeah, you definitely have a lot of impact there. And having experience publishing Mac stuff before, the Apple App Store is no joke. It is the hardest compilation, human compilation effort. So one thing that we definitely wanted to cover is running in the browser. You have a 70 billion parameter model running in the browser. That's right. Can you just talk about how? Yeah, of course. [00:35:02]

Tianqi: So I think that there are a few elements that need to come in, right? First of all, you know, we do need a MacBook, the latest one, like M2 Max, because you need the memory to be big enough to cover that. So for a 70 billion model, it takes you about, I think, 50 gigabytes of RAM.
So the M2 Max, the upper version, will be able to run it, right? And it also leverages machine learning compilation. Again, what we are doing is the same, whether it's running on iPhone, on server cloud GPUs, on AMDs, or on MacBook, we all go through that same MLC pipeline. Of course, in certain cases, maybe we'll do a bit of customization iteration for either ones. And then it runs on the browser runtime, this package of WebLLM. So that will effectively... So what we do is we will take that original model and compile to what we call WebGPU. And then WebLLM will be able to pick it up. And WebGPU is this latest GPU technology that major browsers are shipping right now. So you can get it in Chrome already. It allows you to be able to access your native GPUs from a browser. And then effectively, that language model is just invoking the WebGPU kernels through there. So actually, when Llama 2 came out, initially, we asked the question about, can you run 70 billion on a MacBook? That was the question we're asking. So first, we actually... Jin Lu, who is the engineer pushing this, he got 70 billion on a MacBook. We had a CLI version. So in MLC, you will be able to... That runs through a Metal accelerator. So effectively, you use the Metal programming language to get the GPU acceleration. So we find, okay, it works for the MacBook. Then we asked, we had a WebGPU backend. Why not try it there? So we just tried it out. And it's really amazing to see everything up and running. And actually, it runs smoothly in that case. So I do think there are some kind of interesting use cases already in this, because everybody has a browser. You don't need to install anything. I think it doesn't make sense yet to really run a 70 billion model on a browser, because you kind of need to be able to download the weights and so on. But I think we're getting there. Effectively, the most powerful models you will be able to run on a consumer device. It's kind of really amazing. And also, in a lot of cases, there might be use cases. For example, if I'm going to build a chatbot that I talk to it and answer questions, maybe some of the components, like the voice to text, could run on the client side. And so there are a lot of possibilities of being able to have something hybrid that contains the edge component or something that runs on a server. [00:37:47]

Alessio: Do these browser models have a way for applications to hook into them? So if I'm using, say, you can use OpenAI or you can use the local model. Of course. [00:37:56]

Tianqi: Right now, actually, we are building... So there's an NPM package called WebLLM, right? So that you will be able to, if you want to embed it onto your web app, you will be able to directly depend on WebLLM and you will be able to use it. We are also having a REST API that's OpenAI compatible. So that REST API, I think, right now, it's actually running on a native backend, so that a CUDA server is faster running on the native backend. But also we have a WebGPU version of it that you can go and run. So yeah, we do want to be able to have easier integrations with existing applications. And OpenAI API is certainly one way to do that. Yeah, this is great. [00:38:37]
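
Because the REST server is OpenAI-compatible, hooking an application up to a locally served MLC model can look like the standard OpenAI client with the base URL swapped out. A hedged sketch: the port and model identifier below are illustrative assumptions, not values confirmed in the episode, so check the MLC LLM docs for the exact server command and model names.

```python
# Hedged sketch: talk to a local OpenAI-compatible endpoint (such as the one
# MLC LLM serves) with the standard openai client. The port and model id are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8000/v1",  # hypothetical local server address
    api_key="not-needed-locally",         # local servers typically ignore this
)

response = client.chat.completions.create(
    model="local-llama-2-7b-q4",  # placeholder model identifier
    messages=[{"role": "user", "content": "Why compile ML models?"}],
)
print(response.choices[0].message.content)
```
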
[00:38:57]Tianqi: The good news is that Chrome is doing a very good job of having early releases. So although the official shipment of Chrome WebGPU was at the same time as WebLLM, you could actually try out WebGPU technology in Chrome earlier. There is an unstable version called Canary. I think as early as two years ago, there was a WebGPU version. Of course, it's getting better. So we had a TVM-based WebGPU backend two years ago. Of course, at that time, there were no language models. It was running on less interesting, well, still quite interesting models. And then this year, we really started to see it getting mature and the performance keeping up. So we have a more serious push of bringing the language model compatible runtime onto WebGPU. [00:39:45]Swyx: I think you agree that the hardest part is the model download. Has there been conversations about a one-time model download and sharing between all the apps that might use this API? That is a great point. [00:39:58]Tianqi: I think it's already supported in some sense. When we download the model, WebLLM will cache it into a special Chrome cache. So if a different web app uses the same WebLLM JavaScript package, you don't need to redownload the model again. So there is already something there. But of course, you have to download the model at least once to be able to use it. [00:40:19]Swyx: Okay. One more thing just in general before we're about to zoom out to OctoAI. Just the last question is, you're not the only project working on, I guess, local models. That's right. Alternative models. There's GPT4All, there's Ollama that just recently came out, and there's a bunch of these. What would be your advice to them on what's a valuable problem to work on? And what is just thin wrappers around ggml? Like, what are the interesting problems in this space, basically? [00:40:45]Tianqi: I think making the API better is certainly something useful, right? In general, one thing that we do try to push very hard on is this idea of easier universal deployment. So we are also looking forward to having more integration with MLC. That's why we're trying to build APIs like WebLLM and other things. So we're also looking forward to collaborating with all those ecosystems, and working on support to bring in models more universally and be able to also keep up the best performance when possible, in a more push-button way. [00:41:15]Alessio: So as we mentioned in the beginning, you're also the co-founder of OctoML. Recently, OctoML released OctoAI, which is a compute service that basically focuses on optimizing model runtimes, acceleration, and compilation. What has been the evolution there? So Octo started as kind of like a traditional MLOps tool, where people were building their own models and you helped them on that side. And then it seems like now most of the market is shifting to starting from pre-trained generative models. Yeah, what has been that experience for you, how have you seen the market evolve, and how did you decide to release OctoAI? [00:41:52]Tianqi: One thing that we found out is that on one hand, it's really easy to go and get something up and running, right? But then you start to consider that there are so many possible availability and scalability issues, and even integration issues, and it becomes kind of interesting and complicated. So we really want to make sure to help people to get that part easy, right? 
And now a lot of things, if we look at the customers we talk to and the market, certainly generative AI is something that is very interesting. So that is something that we really hope to help elevate. And also building on top of the technology we've built to enable things like portability across hardware. And you will be able to not worry about the specific details, right? Just focus on getting the model out. We'll try to work on the infrastructure and other things that help on the other end. [00:42:45]Alessio: And when it comes to getting optimization on the runtime, I see, since we run an early adopters community, that most enterprises' issue is how to actually run these models. Do you see that as one of the big bottlenecks now? I think a few years ago it was like, well, we don't have a lot of machine learning talent, we cannot develop our own models. Versus now it's like, there are these great models you can use, but I don't know how to run them efficiently. [00:43:12]Tianqi: That depends on how you define running, right? On one hand, it's easy to download MLC, like you download it, you run it on a laptop, but then there are also different decisions, right? What if you are trying to serve a larger user request? What if that request changes? What if the availability of hardware changes? Right now it's really hard to get the latest NVIDIA hardware, unfortunately, because everybody's trying to work on things using the hardware that's out there. So I think when the definition of run changes, there are a lot more questions around things. And also in a lot of cases, it's not only about running models, it's also about being able to solve problems around them. How do you manage your model locations and how do you make sure that you get your model close to your execution environment more efficiently? So definitely a lot of engineering challenges out there. That we hope to elevate, yeah. And also, if you think about our future, definitely I feel like right now, given the technology and the kind of hardware availability we have today, we will need to make use of all the possible hardware available out there. That will include a mechanism for cutting down costs, bringing something to the edge and cloud in a more natural way. So I feel like this is still a very early stage of where we are, but it's already good to see a lot of interesting progress. [00:44:35]Alessio: Yeah, that's awesome. I would love, I don't know how much we're going to go in depth into it, but what does it take to actually abstract all of this from the end user? You know, like they don't need to know what GPUs you run, what cloud you're running them on. You take all of that away. What was that like as an engineering challenge? [00:44:51]Tianqi: So I think that there are engineering challenges there. In fact, first of all, you will need to be able to support all the kinds of hardware backends you have, right? On one hand, if you look at the NVIDIA libraries, you'll find, not too surprisingly, that most of the latest libraries work well on the latest GPUs. But there are other GPUs out there in the cloud as well. So certainly being able to have the know-how and being able to do model optimization is one thing, right? Also infrastructure, being able to scale things up, locate models. And in a lot of cases, we do find that on typical models, it also requires kind of vertical iteration. So it's not about, you know, building a silver bullet and that silver bullet is going to solve all the problems. 
It's more about, you know, we're building a product, we work with the users, and we find out there are interesting opportunities at a certain point. And then our engineers will go and solve that, and it will automatically be reflected in the service. [00:45:45]Swyx: Awesome. [00:45:46]Alessio: We can jump into the lightning round until, I don't know, Sean, if you have more questions or TQ, if you have more stuff you wanted to talk about that we didn't get a chance to [00:45:54]Swyx: touch on. [00:45:54]Alessio: Yeah, we have talked a lot. [00:45:55]Swyx: So, yeah. We always would like to ask, you know, do you have a commentary on other parts of AI and ML that is interesting to you? [00:46:03]Tianqi: So right now, I think one thing that we are really pushing hard for is this question about how far can we bring open source, right? I'm kind of like a hacker and I really like to put things together. So I think it's unclear what the future of AI looks like. On one hand, it could be possible that, you know, you just have a few big players, you just try to talk to those bigger language models and they can do everything, right? On the other hand, one of the things that we in academia are really excited about and pushing for, and that's one reason why I'm pushing for MLC, is: can we build something where you have different models? You have personal models that know the best movie you like, but you also have bigger models that maybe know more, and you get those models to interact with each other, right? And be able to have a wide ecosystem of AI agents that helps each person while still being able to do things like personalization. Some of them can run locally, some of them, of course, run on a cloud, and how do they interact with each other? So I think that is a very exciting time where the future is yet undecided, but I feel like there is something we can do to shape that future as well. [00:47:18]Swyx: One more thing, which is something I'm also pursuing, which is, and this kind of goes back into predictions, but also back in your history, do you have any idea, or are you looking out for anything post-transformers as far as architecture is concerned? [00:47:32]Tianqi: I think, you know, in a lot of these cases, you can find there are already promising models for long contexts, right? There are state space models, where, you know, some of our colleagues like Albert Gu worked on the HiPPO models, right? And then there is an open source version called RWKV. It's a recurrent model that allows you to summarize things. Actually, we are bringing RWKV to MLC as well, so maybe you will be able to see one of those models. [00:48:00]Swyx: We actually recorded an episode with one of the RWKV core members. It's unclear because there's no academic backing. It's just open source people. Oh, I see. So you like the merging of recurrent networks and transformers? [00:48:13]Tianqi: I do love to see this model space continue growing, right? And I feel like in a lot of cases, it's just that the attention mechanism is getting changed in some sense. So I feel like definitely there are still a lot of things to be explored here. And that is also one reason why we want to keep pushing machine learning compilation, because one of the things we are trying to push on is productivity for machine learning engineering, so that as soon as some of these models come out, we will be able to, you know, empower them onto those environments that are out there. 
[00:48:43]Swyx: Yeah, it's a really good mission. Okay. Very excited to see that RWKV and state space model stuff. I'm hearing increasing chatter about that stuff. Okay. Lightning round, as always fun. I'll take the first one. Acceleration. What has already happened in AI that you thought would take much longer? [00:48:59]Tianqi: The emergence of this conversational chatbot ability is something that kind of surprised me before it came out. This is one piece that I feel originally I thought would take much longer, but yeah, [00:49:11]Swyx: it happens. And it's funny because like the original, like Eliza chatbot was something that goes all the way back in time. Right. And then we just suddenly came back again. Yeah. [00:49:21]Tianqi: It's always interesting to think about, but with a kind of a different technology [00:49:25]Swyx: in some sense. [00:49:25]Alessio: What about the most interesting unsolved question in AI? [00:49:31]Swyx: That's a hard one, right? [00:49:32]Tianqi: So I can tell you what I'm excited about. So, so I think that I have always been excited about this idea of continuous learning and lifelong learning in some sense. So how AI continues to evolve with the knowledge that has been there. It seems that we're getting much closer with all those recent technologies. So being able to develop systems, support, and be able to think about how AI continues to evolve is something that I'm really excited about. [00:50:01]Swyx: So specifically, just to double click on this, are you talking about continuous training? That's like a training. [00:50:06]Tianqi: I feel like, you know, training, adaptation, they're all similar things, right? You want to think about the entire life cycle, right? The life cycle of collecting data, training, fine-tuning, and maybe having your local context be continuously curated and fed into models. So I think all these things are interesting and relevant here. [00:50:29]Swyx: Yeah. I think this is something that people are really asking, you know, right now we have moved a lot into the sort of pre-training phase and off-the-shelf, you know, model downloads and stuff like that, which seems very counterintuitive compared to the continuous training paradigm that people want. So I guess the last question would be for takeaways. What's basically one message that you want every listener, every person to remember today? [00:50:54]Tianqi: I think it's getting more obvious now, but I think one of the things that I always want to mention in my talks is that, you know, when you're thinking about AI applications, originally people thought about algorithms a lot more, right? Algorithms and models are still very important. But usually when you build AI applications, it takes, you know, both the algorithm side, the system optimizations, and the data curation, right? So it takes a connection of so many facets to be able to bring together an AI system, and being able to look at it from that holistic perspective is really useful when we start to build modern applications. I think it's going to continue to be more important in the future. [00:51:35]Swyx: Yeah. Thank you for showing the way on this. And honestly, just making things possible that I thought would take a lot longer. So thanks for everything you've done. [00:51:46]Tianqi: Thank you for having me. [00:51:47]Swyx: Yeah. [00:51:47]Alessio: Thanks for coming on TQ. [00:51:49]Swyx: Have a good one. [00:51:49] Get full access to Latent Space at www.latent.space/subscribe
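For readers who want to try what Tianqi describes above, here is a minimal TypeScript sketch of embedding WebLLM in a web app. This is an illustration, not code from the episode: the package name @mlc-ai/web-llm, the CreateMLCEngine entry point, and the OpenAI-style chat.completions call reflect the library's published interface as best we know it, and the model ID is an assumption you should swap for one from WebLLM's prebuilt model list.

```typescript
// npm install @mlc-ai/web-llm
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // The first call downloads the compiled WebGPU weights; later calls reuse
  // the browser cache Tianqi mentions, so apps sharing the same package
  // share one copy of the model.
  const engine = await CreateMLCEngine(
    "Llama-3.1-8B-Instruct-q4f32_1-MLC", // assumed model ID; pick from the model list
    { initProgressCallback: (report) => console.log(report.text) }
  );

  // OpenAI-style chat completion, executed fully client-side on WebGPU.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Why run an LLM in the browser?" }],
  });
  console.log(reply.choices[0]?.message.content);
}

main().catch(console.error);
```

The caching comment mirrors the point made in the interview: the weights land in a shared browser cache keyed to the model, so a second web app depending on the same package should not trigger a redownload.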
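The OpenAI-compatible REST API mentioned in the same exchange can be exercised like any other OpenAI-style endpoint. The sketch below is hedged accordingly: the localhost base URL, the port, and the model name are assumptions for a locally running MLC-style server, not values given in the episode.

```typescript
// Plain fetch against an assumed OpenAI-compatible chat completions endpoint.
const BASE_URL = "http://localhost:8000/v1"; // assumption: local server and port

async function ask(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // placeholder; use the name your server reports
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  // Standard OpenAI response shape: choices[0].message.content
  return data.choices[0].message.content;
}

ask("Summarize machine learning compilation in one sentence.")
  .then(console.log)
  .catch(console.error);
```

Because the wire format matches OpenAI's, an application can point the same client code at a hosted endpoint or at this local server just by changing the base URL.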

The Global Lithium Podcast
Episode 162: Ron Mitchell

The Global Lithium Podcast

Play Episode Listen Later Jun 3, 2023 70:41


Ron Mitchell is the Managing Director of Global Lithium Resources (ASX GL1) and a veteran of the lithium industry with more than a decade of prior experience at Talison and Tianqi. Topics:
Ron's backstory
Early days doing business in China
Building relationships
Negotiating in China
How does the industry 10X supply?
Ron's spodumene price "crystal ball"
Global Lithium Resources' two projects
Building the team at GL1
Partners
Offtake
Going downstream?
Doing mining differently
Geopolitics
A question for Ron from Tara Berrie
Rapid fire from me as well as Alex Cheeseman
Closing comments on China's "interesting" price narrative

The Global Lithium Podcast
Episode 149: Peter Oliver

The Global Lithium Podcast

Play Episode Listen Later Nov 26, 2022 46:02


Peter Oliver has been one of the most influential people in the lithium industry over the past two decades. He spent 18 years at Talison – 12 years as CEO/Managing Director and later was a Non-Executive Director. Peter guided Talison through the acquisition by Tianqi Lithium in 2013 and then served as an advisor to Tianqi when they sold 49% of Talison to Rockwood (now Albemarle) and later acquired 24% of SQM. He was a founding director of Tianqi Lithium Australia, a wholly owned subsidiary of Tianqi that was established to build the first hydroxide conversion facility in Australia. Peter remained a director until June 2021. Few have the breadth of experience and understanding of the global lithium markets. Peter has recently returned to the lithium world as a Non-Executive Director of Latin Resources. Topics:
The history of Greenbushes and the transition from tantalum producer to dominant hard rock lithium mine
The lithium industry before the lithium ion battery
China's rise as a lithium chemical converter based on Greenbushes spodumene
Competing with brine based lithium in China
Lithium industry structure and why it needs to evolve to supply the energy transition
The reason China continues to lead the world in lithium chemicals production
Lithium geopolitics
Developing talent
Lepidolite – why development has been limited and its potential in the future
Staying under the radar
Coming back to the industry / joining the Latin Resources board
The lithium opportunity in Brazil
Price
Rapid fire

Boardroom Governance with Evan Epstein
Henry Sanderson: Volt Rush, the Winners and Losers in the Race to Go Green.

Boardroom Governance with Evan Epstein

Play Episode Listen Later Nov 14, 2022 58:50


0:00 -- Intro.
2:10 -- Start of interview.
3:00 -- Henry's "origin story". His other book: "China's Superbank: Debt, Oil and Influence - How China Development Bank is Rewriting the Rules of Finance" (2012).
5:03 -- His current role at Benchmark Mineral Intelligence.
6:09 -- The origin of his book Volt Rush: The Winners and Losers in the Race to Go Green (2022).
10:09 -- On the new battery age and the origin of lithium-ion batteries for EVs.
12:53 -- On Contemporary Amperex Technology (CATL) and its founder Robin Zeng.
18:34 -- On the Chinese lithium industry and its champions Ganfeng Lithium and Tianqi Lithium. "They had a golden period where they could pick up assets globally, but now the West is catching up." Example: Government of Canada orders the divestiture of investments by foreign companies in Canadian critical minerals companies.
21:10 -- About Tianqi's $4bn acquisition of SQM's stake in Chile. [Disclosure: I wrote about this case in 2018 here, here and most recently in my latest newsletter, here.] On the future of the Lithium Triangle (Chile, Argentina and Bolivia) for the global lithium supply chain. The unclear future of lithium in Chile: the government has hinted at the creation of a new Chilean national lithium company. "It's a once in a 100-year opportunity, are they just going to sit back and lose out on market share? This opportunity does not come very often."
27:09 -- On the new US industrial policy to foster the EV and battery industry (and divest from China). The Bipartisan Infrastructure Law, CHIPS & Science Act, and the Inflation Reduction Act ("the single largest investment in climate and energy in American history") combined will invest more than $135 billion to build America's EV future, including critical minerals sourcing and processing and battery manufacturing. The impact on the global supply chain, particularly in Latin America, Africa and the rest of the world.
33:03 -- On geopolitics, ESG and sustainability of the global battery supply chain and EVs generally. The problem of greenwashing. Amnesty International's report on cobalt in Africa (2016), "This is What We Die For" (on human rights abuses in the Democratic Republic of the Congo and the global trade in cobalt). "Chinese consumers are also getting more environmentally conscious."
38:02 -- On the challenges of the energy transition from ICE vehicles to EVs. The importance of renewable energy. "Clean energy clusters will become very important."
40:09 -- On energy security, cleaner battery producers (for example Northvolt from Sweden), the rise of gigafactories, the shift to EVs by global OEMs (a Reuters analysis of 37 global automakers found that they plan to invest nearly $1.2 trillion in electric vehicles and batteries through 2030) and the future of jobs in this industry. "Vehicle manufacturing employment, which stands at 13.6 million globally, already employs 10% of its workforce in the manufacture of EVs, their components and batteries." (see IEA world energy employment report). "It is a race for the jobs of the future, and that's where the West has lost out. That's what makes this industry so critical." "But the West will definitely catch up, I'm very optimistic about the U.S."
46:03 -- On whether the U.S. will encourage more mining domestically to bridge this gap. "The mining industry has not done a good job of convincing the public that this is what is needed. People who support clean energy find it hard to support mining. That's the crux of the issue."
48:14 -- On Tesla, and whether they will move upstream in the supply chain with more refining or mining. And their China operations and supply chain dependence.
53:19 -- The 1-3 books that have greatly influenced his life:
The Quiet American, by Graham Greene (1955)
Books by Somerset Maugham
Deng Xiaoping and the Transformation of China, by Ezra Vogel (2011)
Other books he recommends on the global battery supply chain:
Bottled Lightning: Superbatteries, Electric Cars, and the New Lithium Economy, by Seth Fletcher (2011)
The Powerhouse: America, China, and the Great Battery War, by Steve LeVine (2016)
The Shadows of Consumption: Consequences for the Global Environment, by Peter Dauvergne (2008)
55:28 -- Who were your mentors, and what did you learn from them? Michael Forsythe, now with the NYT, from when Henry was in China working for Bloomberg alongside investigative journalists.
56:23 -- Are there any quotes you think of often or live your life by? "Sooner or later...one has to take sides – if one is to remain human." by Graham Greene.
57:18 -- The person he most admires: Greta Thunberg.
Henry Sanderson is a journalist and author of Volt Rush, the Winners and Losers in the Race to Go Green. He's currently an Executive Editor at Benchmark Mineral Intelligence, the leading provider of data and information on the battery industry. Before that he covered commodities and mining for the Financial Times for seven years in London. He was previously a reporter for Bloomberg News in Beijing, where he co-authored a book about China's financial system and state capitalism, China's Superbank. He grew up in Hong Kong and lived and worked in China for seven years.
__
You can follow Henry on social media at:
Twitter: @hjesanderson
__
You can follow Evan on social media at:
Twitter: @evanepstein
LinkedIn: https://www.linkedin.com/in/epsteinevan/
Substack: https://evanepstein.substack.com/
__
Music/Soundtrack (found via Free Music Archive): Seeing The Future by Dexter Britain is licensed under an Attribution-Noncommercial-Share Alike 3.0 United States License

The Context
Tianqi Explosion: The Mystery of Beijing's Big Bang

The Context

Play Episode Listen Later Sep 16, 2022 15:13


Equivalent to an atomic bomb in terms of its destructive power, the Tianqi Explosion is considered one of three major mysteries in recorded history yet to be solved. If you're into solving puzzles, the other two are a 3,600-year-old event called the Mound of the Dead that occurred in ancient India, and Russia's Tunguska Event of 1908. Today, we're going to tell some stories concerning a cataclysmic explosion that turned much of Beijing to rubble nearly 400 years ago. Historians estimate that over 20,000 people were killed or injured, but the mystery of what happened has yet to be solved.

The Global Lithium Podcast
Episode 143: Listener Questions

The Global Lithium Podcast

Play Episode Listen Later Aug 28, 2022 57:36


In this episode I briefly speak about why I believe the "Big Banks" have, for the most part, gotten their recent price forecasts wrong. The rest of the episode is answering listener questions. Topics:
The Inflation Reduction Act
Permitting
DOE Loan Program
DLE
Canada's future in lithium
The EU vs North America in battery supply chain development
Companies mentioned: SQM, Albemarle, Ganfeng, Tianqi, Lithium Americas, Pilbara Minerals, Mineral Resources, Rio Tinto, Green Technology Metals, Tesla, Galan Lithium, Livent, Allkem, Wesfarmers, Frontier, Critical Elements, E3 and several more

The Global Lithium Podcast
Episode 142 Anand Sheth & Roland Chavasse

The Global Lithium Podcast

Play Episode Listen Later Aug 18, 2022 72:16


I interview the co-founders of the International Lithium Association (ILiA). Anand, known as "The Lithium Raj," has deep lithium experience. Roland has experience running other major industry associations. We start off discussing Anand's experience as a lithium pioneer marketing spodumene from Australia (Greenbushes) before discussing the International Lithium Association's reason for being. Follow them on Twitter: @ILiA_lithium & LinkedIn. The ILiA website: https://lithium.org/ Topics:
The early days & growth of the lithium market in China
Building relationships & negotiating in China
The early days of Tianqi & Ganfeng
The transition from spodumene use in glass & ceramics to chemical conversion
How China competed with low cost brine imports from South America
China's rise in battery
Moving from Talison to Galaxy & later Pilbara Minerals
The thinking behind the creation of the International Lithium Association
Attracting the major lithium players as members
The Association's areas of focus and challenges
Serving members & the general public
What does ILiA look like in 2027?
Rapid fire with the first "do over" in GLP history

MONEY FM 89.3 - Your Money With Michelle Martin
Market View: People's Park Centre S$1.8b sale, GameStop, DoorDash, Tianqi Lithium, Netflix, Apple, Higher rates to quell inflation, Currency Market impacts

MONEY FM 89.3 - Your Money With Michelle Martin

Play Episode Listen Later Jul 7, 2022 16:24


People's Park Centre is up for collective sale with a S$1.8b reserve price! But will this attempt to sell People's Park Centre have a greater chance of success than the last one? On a separate note, with the US Fed agreeing that sharply higher rates may be needed to quell inflation, what does this mean for the economy moving forward? Michelle Martin and Ryan Huang find out. See omnystudio.com/listener for privacy information.

EV News Daily - Electric Car Podcast
23 May 2022 | Hyundai Motor Group to invest $5.5b in Georgia factory

EV News Daily - Electric Car Podcast

Play Episode Listen Later May 23, 2022 16:15


Show #1476. Good morning, good afternoon and good evening wherever you are in the world, welcome to EV News Daily, your trusted source of EV information. It's Monday 23rd May. I'm Blake Boland, and I've gone through every EV story today so that you don't have to! Lithium giant Tianqi to form JV with NIO's semi-solid-state battery supplier WeLion - Tianqi is one of China's largest lithium producers, with a battery-grade lithium metal capacity of about 2,900 tons in 2020, accounting for almost half of China's total lithium metal capacity. - Lithium metal has a very low density of 0.534 g/cm3 and a capacity of up to 3,860 mAh/g, roughly ten times that of the current graphite anode material (372 mAh/g), giving a higher energy density, WeLion said. - NIO unveiled plans to offer 150 kWh semi-solid-state batteries when it launched its flagship sedan, the ET7, at the NIO Day 2020 event on January 9, 2021, generating much attention for this new battery. - NIO never disclosed the supplier of the battery; however, on March 27, WeLion chief scientist and founder Li Hong said that the company is the supplier of the electric vehicle maker's solid-state battery. Original Source: Lithium giant Tianqi to form JV with NIO's semi-solid-state battery supplier WeLion - CnEVPost. BYD Seal gets 22,637 orders in 7 hours of pre-sale - As of 10 pm on May 20, pre-sale orders for the Seal reached 22,637 units, just seven hours after pre-sale officially began at 3 pm, according to information shared by BYD today. - At a time when Chinese car company sales generally plunged in April, BYD saw its NEV sales reach 106,042 units, the second consecutive month of more than 100,000 units. - In a conservative scenario, BYD expects to sell 1.5 million units in 2022, with sales expected to reach 2 million if supply chain conditions improve, according to the minutes of a previous meeting. Original Source: BYD Seal gets 22,637 orders in 7 hours of pre-sale - CnEVPost. Hyundai Motor Group to build $5.54B EV plant and battery factory in Georgia - Hyundai Motor Group (HMG) entered into an agreement with the State of Georgia to build its first dedicated full electric vehicle and battery manufacturing facilities in the US. The new EV plant and battery manufacturing facilities represent an investment of approximately US$5.54 billion. Non-affiliated Hyundai Motor Group suppliers will invest approximately another $1 billion in the project. - The new facility will break ground in early 2023 and is expected to begin commercial production in the first half of 2025 with an annual capacity of 300,000 units. The battery manufacturing facility will be established through a strategic partnership, the details of which will be disclosed later. - The EV and battery manufacturing plant will be located on a dedicated 2,923-acre site in Bryan County, Georgia (the Bryan County Megasite), with immediate access to the I-95 and I-16 highways, which creates easy access to 250 major metro areas. It is less than 50 kilometers (31 miles) from the Port of Savannah, the single-largest and fastest-growing container terminal in the US, with two Class I rail facilities on-site. Rail service to the site is provided by Georgia Central Railway, a short line railway that connects to CSX in Savannah and Norfolk Southern near Macon in Middle Georgia. 
Original Source: Hyundai Motor Group to build $5.54B EV plant and battery factory in Georgia - Green Car Congress. Nissan & Mitsubishi present compact EV for Japan - Nissan and Mitsubishi have presented a jointly developed electric small car for the Japanese market. It is marketed by Nissan under the name 'Sakura' and by Mitsubishi as 'eK X EV' and offers a range of 180 kilometres, according to Japanese WLTP standards. - The Sakura runs on an electric drive system producing 47 kW and 195 Nm of torque. A 20 kWh lithium-ion battery allows for the previously mentioned up to 180 km range, while the top speed is set at 130 km/h. Charging takes about 8 hours with a "standard" AC charge, and can be done as quickly as 40 minutes for a "warning light to 80 per cent" charge on a fast charger. A V2H reverse charging capability is also included. - Kei-cars are popular in Japan, accounting for about 40% of the car market, and are limited to 3.4 meters in length, 1.48 meters in width and 2.0 meters in height. Original Source: Nissan & Mitsubishi present compact EV for Japan - electrive.com. Tesla Is Building a 'hardcore' Litigation Department to Seek Justice - Tesla is building a hardcore litigation department whose main goal will be to seek justice, not victory at any cost. Elon Musk said the department will never seek victory if a company is justly sued, but will never give up if the case is unfair. - Tesla CEO Elon Musk announced that the company is building a hardcore litigation department that will directly initiate and execute lawsuits. He said the team would report directly to him. The head of the company made two commitments about how the department would work: The department will never seek victory in a just case against the company, even if it will probably win. The department will never surrender/settle an unjust case against the company, even if it will probably lose. - Those wishing to apply should send their CV to justice@tesla.com. A CV should contain 3 to 5 bullet points describing evidence of exceptional ability. In addition, Musk asked to include links to cases that the candidates tried. He is looking for real fighters who are ready to fight in the battles for justice in the courtroom. - 'There will be blood'. Original Source: Tesla Is Building a 'hardcore' Litigation Department to Seek Justice (tesmanian.com). Sysco Intends To Buy Up To 800 Freightliner eCascadia - According to the Letter of Intent (LOI), the fleet would be deployed gradually between 2022 and 2026, with the first eCascadia delivery expected to arrive at Sysco's Riverside, California site later this year. - In the case of Sysco, the vehicles will be combined with refrigerated trailers, but the press release does not clarify whether the trailer will be powered from the main battery to fully utilize the EV potential. - In the long term, Sysco intends to electrify 35% of its fleet by 2030. The site at Riverside, California is already in the process of expanding its charging infrastructure and installing additional solar capacity. 
Freightliner eCascadia (Class 8 tractor) specs:
- up to 230 miles (370 km) of range
- Tandem drive and 438 kWh battery: typically 220 miles (354 km)
- Single drive and 438 kWh battery: typically 230 miles (370 km)
- Single drive and 291 kWh battery: typically 155 miles (249 km)
Original Source: Sysco Intends To Buy Up To 800 Freightliner eCascadia (insideevs.com)
EV Surge Likely After Labor Wins In Australia - 'Yesterday (Saturday, May 21) witnessed a historic Labor Party win in Australian federal politics. For 10 years, the Liberal federal government has denied climate change science and slow-walked the transition to renewable energy. Yesterday's historic defeat for this coalition will change all that and likely lead to an EV surge.' - They are not hanging about — Labor's Electric Car Discount will begin on 1 July 2022, the beginning of the new financial year, and only 5 weeks away! - 'Australians love their cars. Multiple car ownership is common (I used to own three — one for me, one for the wife, and one as a toy). Because of this, passenger cars make up almost 10 per cent of Australia's CO2 emissions. To move Australians from fossil fuel burning cars to electric vehicles powered by renewable energy, Labor proposes exempting EVs from the 5% import tax and the 47% fringe benefits tax (a similar move to the UK government, which led to a spectacular increase in uptake).'
Original Source: EV Surge Likely After Labor Wins In Australia - CleanTechnica
EVs are avoiding about 3% of global oil demand - 'Plug-in vehicles avoided roughly 1.5 million barrels of oil per day last year, according to new analysis from Bloomberg New Energy Finance. That's about one-fifth of Russia's pre-invasion oil exports, Bloomberg NEF said.' - 'The oil use avoided by EVs has also doubled since 2015, to about 3% of global demand, according to the analysis.' - While electric cars tend to get most of the attention, the analysis found that other vehicle types accounted for the most oil avoidance. Electric two- and three-wheeled vehicles, which tend to be popular in Asia, accounted for 67% of the oil demand avoided in 2021, according to Bloomberg NEF. - Those vehicles had an outsized impact on oil demand. Next in rank were electric buses, which accounted for 16% of avoided oil demand, followed by passenger vehicles at 13%. The latter were the fastest-growing segment, Bloomberg NEF noted.
Original Source: EVs are avoiding about 3% of global oil demand—a fifth of Russia's total exports (greencarreports.com)
QUESTION OF THE WEEK WITH EMOBILITYNORWAY.COM
What is your dream driveway? But there are some rules: 2 or 3 vehicles, budget is $150,000 USD or equivalent wherever you are. Email your answers to Martyn: hello@evnewsdaily.com
It would mean a lot if you could take 2 mins to leave a quick review on whichever platform you download the podcast.
PREMIUM PARTNERS
PHIL ROBERTS / ELECTRIC FUTURE
BRAD CROSBY
PORSCHE OF THE VILLAGE CINCINNATI
AUDI CINCINNATI EAST
VOLVO CARS CINCINNATI EAST
NATIONAL CAR CHARGING ON THE US MAINLAND AND ALOHA CHARGE IN HAWAII
DEREK REILLY FROM THE EV REVIEW IRELAND YOUTUBE CHANNEL
RICHARD AT RSEV.CO.UK – FOR BUYING AND SELLING EVS IN THE UK
EMOBILITYNORWAY.COM/
OCTOPUS ELECTRIC JUICE - MAKING PUBLIC CHARGING SIMPLE WITH ONE CARD, ONE MAP AND ONE APP
MILLBROOKCOTTAGES.CO.UK – 5* LUXURY COTTAGES IN DEVON, JUMP IN THE HOT TUB WHILST YOUR EV CHARGES

The Translated Chinese Fiction Podcast
Ep 66 - Ba Jin and Hong Kong Nights with Luo Tianqi

The Translated Chinese Fiction Podcast

Play Episode Listen Later Jan 24, 2022 131:39


The rocking of the boat created the illusion that all the lights were moving. In the sixty-sixth episode of The Translated Chinese Fiction Podcast we are adrift in Hong Kong Nights (香港之夜 / Xiāng Gǎng Zhīyè), as fleetingly recollected by Sichuan's long-surviving left-anarchist writer, Ba Jin. Joining me in the constellations is fellow Sino-lit podcaster Luo Tianqi – here to talk about revolution, regret, and responsibility. Grab a seat on deck, comrade, brush up on your Bakunin, and let go of your transient identity as sights become sounds, and sounds become sights.
// NEWS ITEMS //
Lu Xun's The True Story of Ah Q to broadcast on BBC Radio 4 on January 26
The Sons of Red Lake and Chinese Elemental Philosophy - Sinoist Books' blog
The Way Spring Arrives gets a beautiful Chinese edition
A dialogue between past and future show guests Mike Fu and Jenna Tang
KakaoEntertainment buys Wuxiaworld - plus thoughts from the translators
// WORD OF THE DAY //
(家 - jiā - home/family)
// MENTIONED IN THE EPISODE //
Tianqi's podcast
Various translated Ba Jin essays available on Anarchy Archives
Tianqi's musical pairing: Hong Kong Nights 香港之夜 by Teresa Teng 鄧麗君
Angus' musical pairing: Hieroglyph by Cynic
Dissenting from Ba Jin by Geremie Barmé // Available on DOUBAN and GDOCS
Family by Ba Jin (trans. by Sidney Shapiro)
Fiona and Jane by Jean Chen Ho
Olga Dies Dreaming by Xochitl Gonzalez
Social Class in the 21st Century by Mike Savage
The Meaning of Life: A Very Short Introduction by Terry Eagleton
// Handy TrChFic Links //
The TrChFic mailing list
Episode Transcripts
Help Support TrChFic
The TrChFic Map
INSTAGRAM // TWITTER // DISCORD // HOMEPAGE

Dongfang Hour - the Chinese Aerospace & Technology Podcast
Chongqing Builds Space-based Solar Power Base, EO Analytics Startups & Henan Disaster Relief Effort, Launch of Yaogan Satellites and Tianqi-15 on a Long March 2C - Ep 43

Dongfang Hour - the Chinese Aerospace & Technology Podcast

Play Episode Listen Later Jul 26, 2021 26:59


Hello and welcome to another episode of the Dongfang Hour China Space News Roundup! A kind reminder that we cover a lot more stories every week in our Newsletter (newsletter.dongfanghour.com). This week, we discuss:
China officially kicks off the construction of the country's first space-based solar power industry base in Chongqing
Chinese Earth observation data analytics startups participate in the Henan disaster relief effort after the recent floods
A new series of Yaogan satellites launched onboard a Long March 2C, with Tianqi-15 as a secondary payload
Thank you for your kind attention, we look forward to seeing you next time. Also, don't forget to follow us on YouTube, Twitter, or LinkedIn, or your local podcast source. And please give us a thumbs-up!

The Way through Baguazhang - 八卦掌道
244. On being a Living Tao - 活道 (Part 8)

The Way through Baguazhang - 八卦掌道

Play Episode Listen Later Jun 27, 2021 4:00


So it's a Sunday morning and I am staring out an open window. The leaves on the trees across the road are swaying gently, and I can't see a single cloud in the sky from my limited viewpoint. I can hear some birds and the occasional train going past. And you'd be forgiven for thinking that I am spending lockdown in Sydney in some remote hideaway like a hermit. But I am not. I am at home in Meadowbank, Ryde, crunching my weekly income and outgoing columns, and taking stock of my financial position: where I am and where I want to be in one week's time. Now! I am aware that what I have just spoken about seems to have no bearing on the nature of being a "Living Tao" (活道), but it is precisely when things stop making logical sense that the Tao reveals itself: What we perceive as order from the human perception is in fact an illusion. There is no order, and at the same time there is no chaos or disorder. Both extremes are illusions. The Way of Virtue or Harmony is the natural alignment of things so that things just flow effortlessly. So having said this, I am grateful for this enforced two week holiday granted to me by the New South Wales government. Okay, so it was a bit last minute and brought about by the delta strain of the Coronavirus. But still, I choose to see it as a welcome respite from city living and an opportunity to live a little differently from the usual hum-drum. Starting with doing more of my self-created Tai-chi form. I'd had enough of others insisting that I must also do Tai-chi alongside my baguazhang 八卦掌, when all I want to be is a baguazhang master 八卦掌師. Tai-chi just isn't my thing. And so, reading which way the wind is blowing like a good Tianqi master 天気大師, a couple of years ago I created my own Tai-chi Baguazhang form. And I called it (in Chinese) Tai Qi Zhang 泰極掌. The first character is the same Tai 泰 as in Thailand. With the second character I have kept the qi 極 character as is used in traditional Taiqiquan. The final character is the same Zhang 掌 as in baguazhang, because as far as the movements are concerned I am still doing baguazhang. I am still walking the circle. And I am still doing my palm changes. Only this time much, much slower, with tai-chi-esque features incorporated into it. And now? Where am I now? Where I am now is that I no longer have a Tai-chi problem. I am what I am.

Dongfang Hour - the Chinese Aerospace & Technology Podcast
China's First Space Station Module get Launched, Creation of Megaconstellation Operator China SatNet, Significant Ride-Share Mission Launched on Long March 6 - Ep 31

Dongfang Hour - the Chinese Aerospace & Technology Podcast

Play Episode Listen Later May 2, 2021 30:03


Hello and welcome to another episode of the Dongfang Hour China Aero/Space News Roundup! Without further ado, the news update from the week of 26 April - 2 May.
1) Chinese Space Station: launch of the core module Tianhe on-board a Long March 5B
On April 29 2021, China successfully sent the first module of the Chinese Space Station into orbit. The module was a 22.5-ton core module called Tianhe (天和, or "heavenly peace"). It was launched aboard a Long March 5B. Tianhe is rightfully named the core module: it will be the centerpiece of the station, hosting living quarters for the taikonauts, a bathroom and a kitchen; it will also be the main control unit (attitude and trajectory control), handling the fuel, power and air management systems. It is designed to host 3 taikonauts, and can hold up to 6 taikonauts during rotations. The Tianhe module will be joined next year by the Mengtian and Wentian experimental modules. In 2024, the Chinese space station will also be joined by a space telescope called Xuntian, which will not be physically connected to the station but will evolve in its vicinity, docking only for maintenance purposes.
2) Creation of a new space SOE, China SatNet
Major news update on Thursday 29 April, which first came in the form of a press release from SASAC. The press release announced the creation of a China Satellite Networks Group Company, potentially SatNet for short, and puts that company under the direct administration of SASAC. SatNet is tasked with deploying and operating China's LEO broadband constellation, widely speculated to be GuoWang. The creation of a SatNet company at this level of the SOE hierarchy is hugely significant. SatNet is, at least in theory, at the same level in the hierarchy as CASC, CASIC, and the big 3 telcos (all of which are also directly controlled by SASAC). If we compare this to the previous arrangement, you had China's biggest broadband project being done by CASC (Hongyan), and another by CASIC (Hongyun), with both projects involving subsidiary companies with multiple shareholders (for example, Hongyan's operating company, MacroNet, has shareholders including CASC, China Telecom, and CETC). This would have meant, presumably, that projects like Hongyan would have mostly used the technology of CASC, Hongyun would have used technology from CASIC, and the innovation and competition would have occurred in the long term, after we found out how these constellations work. On the other hand, the current situation with SatNet being at the same level as CASC, CASIC, the telcos, etc., means that it should, at least in theory, have a lot more freedom of choice in its sourcing options.
3) Long March 6 launches a batch of 9 smallsats into orbit, many payloads of interest
On April 27th, China launched an impressive ride-share mission on-board a Long March 6, putting 9 satellites into orbit. These satellites were:
Qilu-1 and Qilu-4. Qilu-1 is a SAR satellite; Qilu-4, on the other hand, is a high-resolution panchromatic EO satellite.
Foshan-1, a high-resolution panchromatic EO satellite meant to be a technology verification platform of the Foshan-based Jihua Laboratory.
Zhong'an Guotong-1 satellite (also called Hangsheng-1).
Guodian Gaoke's Tianqi-9 IoT satellite, developed by ASES Space.
Origin Space's NEO-1: a technology verification satellite, meant to trial the capture of a small celestial body and various orbital maneuvers.
Golden Bauhinia-1-01 and -02, two remote sensing satellites.
Taijing-2 satellite, manufactured by Minospace, which will be used primarily for remote sensing, and is the third satellite launched to be based on the MN-50 platform.
----------------------------------------
Follow us on YouTube, LinkedIn, Twitter, as an audio podcast, and on our official website: https://www.dongfanghour.com/

Dongfang Hour - the Chinese Aerospace & Technology Podcast
Aero & Space Weekly News Round-Up - Ep.13 (21st - 27th Dec. 2020)

Dongfang Hour - the Chinese Aerospace & Technology Podcast

Play Episode Listen Later Dec 28, 2020 30:36


Hello and welcome to another episode of the Dongfang Hour China Aero/Space News Roundup! Without further ado, the news update from the week of 21-27 December:
1) Long March 8 Launch (follow-up)
The Long March 8 launched for the first time on 21 December, with the payload including 5 satellites. As we discussed in more detail on last week's episode 12 of the China Aero/Space News Roundup, the payload was diverse, with commercial companies, an Ethiopian satellite, and a couple of government/CAS payloads. Some additional information was publicized in the days following the launch that is worth noting. First, the Tianqi-8 satellite from Guodian Gaoke was apparently launched partly in partnership with Ping'an, possibly China's largest insurer. Another satellite of note on the LM-8 launch was the Haisi-1 satellite from Spacety, which was a sort of mini-SAR satellite. SAR remains a relatively less-developed part of the EO sector in China, with optical being a lot more developed.
2) News announcements regarding the calendar of the Chinese Space Station
On December 25th, a ceremony was held in Changsha for the transfer of the Shenzhou-10 capsule to the Province of Hunan. Shenzhou 10 is a mission that took place in 2013. Before the ceremony, a journalist from China Space News had the opportunity to interview one of the high profile attendees of the day, Zhou Jianping, the chief engineer of China's crewed spaceflight program. Zhou Jianping gave some very useful insights on the upcoming timeline for the Chinese Space Station.
3) China Space News article on Moon resources
China Space News published a piece last week summarizing the economic benefits of lunar resources. While not much of the China Space News article is really news, it's interesting to note that it comes a week after the end of the Chang'e 5 mission, which successfully returned lunar samples to Earth. It does suggest that one of the main objectives of the lunar exploration program is to investigate the viability of lunar resource mining. This has been reflected already in multiple interviews of Chinese space industry officials in the past (such as Bao Weimin or Ouyang Ziyuan).
4) China Satcom further verticalizing & pivoting to mobile broadband: interview of Sun Jing (GM of China Satcom)
Interesting interview of Sun Jing on the 24th of December, discussing the changes in China Satcom's strategy in recent years. Sun mentions that in 2018, ChinaSat acquired the first-class national basic telecommunications license, at which time they chose to work alongside the telecom operators to cover areas not possible to cover with terrestrial fiber, as opposed to potentially competing with them as a telco. In particular, Sun emphasizes the extent to which ChinaSat is focusing on IFC, including the recent founding of Xinghang Hulian (星航互联), an IFC-focused subsidiary.
5) Completion of the static tests of the Chinese MA700 regional turboprop
Xi'an Aircraft Corporation completed the static testing of its next generation regional turboprop aircraft, the MA700 (MA = Modern Ark, or 新舟 in Chinese), on December 19. With the completion of the static tests, the MA700 should be able to begin flight testing in 2021.
This has been another episode of the Dongfang Hour China Aero/Space News Roundup. Thanks for watching, and we wish you a wonderful year in 2021!
---------------------------------------------
Follow us on YouTube, LinkedIn, Twitter (https://twitter.com/DongFangHour), as an audio podcast, and on our official website: https://www.dongfanghour.com/

Dongfang Hour - the Chinese Aerospace & Technology Podcast
Aero & Space Weekly News Round-Up - Ep.12 (14th - 20th Dec. 2020)

Dongfang Hour - the Chinese Aerospace & Technology Podcast

Play Episode Listen Later Dec 21, 2020 28:25


Hello and welcome to another episode of the Dongfang Hour China Aero/Space News Roundup! Without further ado, the news update from the week of 14-20 December.
1) Long March 8 Launch on Sunday 20/12
Sunday morning (20/12/20) saw plans for the very first launch of China's latest medium-lift launch vehicle, the Long March 8, from Wenchang Launch Center. Unfortunately, the launch was scrubbed on Sunday due to a tropical storm, but we expect the launch to occur in the coming days. The payloads will be:
- a classified EO technology satellite by CAST (XJY-7)
- the Ethiopian ETHSAT6U (cubesat), in cooperation with Beijing Smart Satellite
- Guodian Gaoke's Tianqi-12 satellite (for its IoT constellation)
- 2 satellites of Spacety: one in cooperation with CETC (Haisi-1) and the other a technology verification satellite
The LM8 is actually based on a set of rather proven building blocks, used in the LM5-7, namely the kerolox YF-100 engines in its core stage and boosters (used also in the LM 5, 6 and 7), and the YF-75 for the 2nd stage, a hydrolox engine employed already on the LM3 as well as the LM5. While the upcoming launch is going to be expendable, it is noteworthy that the LM8 will (eventually) be CASC's first attempt at reusability. As was unveiled in 2018, CASC plans to have the first stage, together with the two side boosters, perform vertical landing, throttling the engine and using grid fins for control during the landing stage. This will no doubt mean modifications on the YF-100 to enable the engines to run at lower thrust and be restartable multiple times.
2) Chang'e-5 Return to Earth
China's Chang'e 5 return capsule returned to Earth in the evening of December 16th, after a perilous re-entry into the atmosphere. The capsule was located rapidly by search teams and helicopters (equipped with infrared cameras). This was live-streamed on Chinese media, and was absolutely fascinating to watch. On the 19th of December, a ceremony was held to remove the sample container and weigh the sample, which turned out to be 1,731 grams, as opposed to the expected 2 kg.
3) New JV between China Eastern, Juneyao Group, and China Telecom
On December 16th 2020, China Eastern, Juneyao Group, and China Telecom jointly established a joint venture called KDLink, which is to focus on connected aircraft. The press release doesn't discuss the projects of the JV in much detail (other than "product innovation, technology innovation and new application"). But China Eastern has been one of the pioneering airlines in China, with approximately half of the connected aircraft in China belonging to the Shanghai-based company. The joint venture with Juneyao Airlines, another Shanghai-based airline that also operates connected aircraft, could mean a new wave of deployment of (satellite?) connectivity among both airlines' fleets.
4) Baidu is Considering Making its own Electric Vehicle
Or outsourcing to other manufacturers, as the case may be. Either way, an article appeared this week quoting "three people with knowledge of the matter" saying that Baidu is considering developing their own electric car through collaboration, potentially with Geely. Robin Li, CEO of Baidu, was the first billionaire in China to talk about the need to open the space industry to private investment, something he was doing as early as 2014. 
We thank you for your kind attention, and look forward to seeing you next time!
---------------------------------------------
Follow us on YouTube, LinkedIn, Twitter (https://twitter.com/DongFangHour), as an audio podcast, and on our official website: https://www.dongfanghour.com/

Dongfang Hour - the Chinese Aerospace & Technology Podcast
Aero & Space Weekly News Round-Up - Ep.6 (2nd - 8th Nov. 2020)

Dongfang Hour - the Chinese Aerospace & Technology Podcast

Play Episode Listen Later Nov 9, 2020 21:32


Welcome to another episode of the Dongfang Hour China Aero/Space News Roundup! This week we bring you updates on Galactic Energy, China's maritime satcom industry and the product offerings therein, and China's role in the regional EO market.
1) Galactic Energy completed a RMB 200 million Series A funding round in September, a round that was announced last week. This is the company's first round in ~11 months, having raised RMB 150M in Oct 2019. The company has now raised ~RMB 500 million across 4 rounds, a feat made even more remarkable by the fact that they were founded just under 3 years ago. Funding will go towards accelerating the development of the company's Pallas-1 and Ceres-1 rockets. Galactic Energy is now quite likely one of the top 4 commercial launch companies in China, along with Landspace, iSpace, and Expace. This status in the "Big Four" of China's commercial launch sector was enhanced just a couple of days ago, when on November 7th 2020, Galactic Energy held the inaugural launch of its solid rocket, the Ceres-1, becoming the 2nd private company in China to put a satellite into orbit after iSpace. Ceres-1, much like iSpace's Hyperbola-1, is a small rocket with a capacity of 350 kg to LEO. While similar in propulsion technology, there are some definite slight differences between the two rockets (separation method, attitude control). Galactic Energy's inaugural launch on the 7th was also noteworthy in that it was commercial, with the Ceres-1 rocket launching the Tianqi-11 satellite for Guodian Gaoke, a satellite manufacturer that plans to launch and operate the Tianqi constellation.
2) In the maritime space, we saw announced this week that the CASIC 2nd Academy, in partnership with the China Unicom Research Institute and the government of Zhoushan City, Zhejiang Province, completed China's first "Low Orbit Broadband Satellite + 5G Maritime" test. While the article, which was originally published by the CASIC 2nd Academy, does not explicitly mention Hongyun, the phrasing of LEO broadband satellite would almost certainly imply that the tests were using the Hongyun test satellite, launched in late 2018. Separately, as part of China's 11/11 "Singles Day", a huge online shopping day, we saw SinoSat release its 11/11 promotions for its Haixingtong maritime satcom service, which included 200MB of free data upon signing up and RMB 800 per year for unlimited voice. SinoSat is a ChinaSat subsidiary with a focus on several high-value verticals with global requirements, i.e. maritime satcom, and has been building out a maritime satcom service for several years using satellite capacity from ChinaSat among others.
3) Finally, the Asia-Oceania Group on Earth Observations (AO GEO) held a meeting in Changzhou early in the week. The event was attended by 15 countries and several international organizations. "China is playing an important role in the Asia-Oceania region, with the second highest number of remote sensing satellites in the world and its application of Earth observation shifting from experimental use to business services", said Wang Qi'an, the director of the National Remote Sensing Center of China.
This has been another episode of the Dongfang Hour China Aero/Space News Roundup. If you've made it this far, we thank you for your kind attention, and look forward to seeing you next time!
---------------------------------------------
Follow us on YouTube, LinkedIn, Twitter, and on our official website: https://www.dongfanghour.com/

Dongfang Hour - the Chinese Aerospace & Technology Podcast
Aero & Space Weekly News Round-Up - Ep.5: CCAF 2020 Special Edition (Part 2 of 2)

Dongfang Hour - the Chinese Aerospace & Technology Podcast

Play Episode Listen Later Nov 2, 2020 30:36


This week, we bring you updates on China's Earth Observation sector and discussions on satellite 5G/6G and IoT, but first, part 2 of our summary of the 6th annual China Commercial Aerospace Forum, held in Wuhan 2 weeks ago.
6th China Commercial Aerospace Forum Summary (part 2/2):
iSpace: One of the leading private launch companies in China. Company VP Huo Jia at CCAF discussed their plans for Hyperbola-2, as well as post-Hyperbola-2 projects:
Hyperbola-2: 100 km hopping experiments to test the landing/engine throttling/control capabilities by end of 2020, and then a full orbital test of Hyperbola-2 if the hopping tests are conclusive.
Hyperbola-3: a medium-lift rocket, which can be turned into a heavy-lift rocket with 2 or 4 side boosters. Development of the Jiaodian-2 heavy thrust methalox engine (100 tons thrust).
Zhongke Aerospace (aka CAS Space): technical discussion on launch vehicle control; also indicated that the first launch of ZK-1 would be in September 2021.
EO companies with an increasingly vertical approach to the industry: A number of EO companies have seen major traction over the past several years, and several of them spoke at the CCAF. Several EO companies are vertically integrating across different parts of the value chain. Related to EO, a pretty good article was published by Satellite & Network (卫星与网络) about investment into the EO sector. It mentions that the 'midstream' services in EO are most popular for private investment, partly because compared to the other two main types of space applications (comms and satnav), EO is "more open to commercial enterprise and with stronger commercial flexibility".
Non-CCAF news
Yaogan launch: a group of 3x Yaogan satellites was launched from Xichang this week. Yaogan is one of China's largest EO initiatives, with the satellites believed to be focused on military applications. Piggybacking on the Yaogan launch was the Tianqi-6 satellite of Guodian Gaoke's 38-satellite IoT constellation.
Other Conferences
China Satellite Conference: The conference was standing room only, with a sold-out house at the Nikko Hotel in Beijing. An interesting takeaway was the sub-forum on satellite 5G and 6G, which included CETC and could be an indication of CETC pushing more into space as a way of expanding their business.
China Industrial IoT + 5G Forum Wuhan: at the forum, MIIT discussed the idea of industrial IoT + 5G. Ms Han Xia, Director of the Information and Communication Administration Bureau of MIIT, mentioned 3 challenges faced by the rollout of Industrial IoT:
Uneven levels of digitization among enterprises
Insufficient industrial support capacity for things like gateways, terminals, etc.
Integrated applications still need to be deepened/improved
A related article mentioned that in China, there have been more than 800 "Industrial IoT + 5G" projects commenced thus far, with a total investment of RMB 3.4 billion.

Caixin Global Podcasts
Caixin China Biz Roundup: Who Will China’s New Tech Export Restrictions Impact?

Caixin Global Podcasts

Play Episode Listen Later Sep 2, 2020 13:24


In today’s episode: New rules could affect drone and robot exports; top lithium producer Tianqi reports a $101 million loss amid dwindling sales and prices; and the battle of the air conditioners heats up as Midea overtakes Gree.

SPECIAL OFFER: Great news! Caixin Podcast listeners can now enjoy a 7-day complimentary access pass to caixinglobal.com and the Caixin app. This is a limited-time offer. Get your pass by heading to: https://www.caixinglobal.com/institutional-activity/?code=J3XVJC

The Global Lithium Podcast
Episode 76: Alison Dai

The Global Lithium Podcast

Play Episode Listen Later Aug 21, 2020 35:18


Alison Dai is the Commercial Director of Chengdu Chemphys, a specialty producer of high-purity lithium chemicals. Although less well known than the major Chinese producers Ganfeng and Tianqi, Chemphys was actively selling high-quality lithium chemicals into the difficult-to-penetrate Japanese and Korean markets long before its larger rivals. Alison was born in China, raised in Western Australia, and left an investment banking career with JP Morgan Chase to return to Chengdu and become involved in the family lithium business. We discuss the current state of the lithium market and ponder when the oversupply situation will turn to shortage. I ask Alison about Chemphys’ plans for both expansion and utilizing partnerships to grow in the international market. We explore the rise of lithium reprocessors, the challenge of upgrading industrial-quality lithium to battery quality, and opportunities for Chemphys to help brine producers implement direct lithium extraction. Alison gives her thoughts on the rise of high-nickel cathodes and the recent resurgence of the LFP cathode. And, of course, we close with rapid fire.

Weltraumbahnhof
WRB012 - Langer Marsch 4B / Ziyuan 3-3 & Tianqi 10 & Lobster Eye 1 (2020-07-25)

Weltraumbahnhof

Play Episode Listen Later Aug 5, 2020 4:28


The Way through Baguazhang - 八卦掌道

☳ Because the Baguazhang 八卦掌 master moves in a circle, Heaven blesses the master with the ways of the sky. The Swimming Dragon form embodies the typhoon. Round and round it goes; this way and that. So that when the master has learnt the ways of Qigong ☳ 氣功 and Gongfu ☵ 功夫, the next level is Tianqi ☶ 天気 mastery.

☵ To be a good Tianqiren, or Weatherman, a person must be good at reading the weather and then be able to plot the future direction of one's current state of affairs to a profitable outcome.

☶ This is more than just predicting weather patterns and seeing weather forecasts on the nightly news - it is about knowing where the storm will hit and to what severity, so that when the king asks "How will it impact my kingdom?" the Weatherman can follow up the most likely scenarios with the potential opportunities to follow. Which is what the king actually wants to know.

☰ Cyclonic weather patterns, freakish electrical storms, and plummeting celestial fireballs traditionally heralded the end of kingdoms and dynasties, the world over. While it is easy to hide behind modern science and dismiss these things as superstitions, very few people realise that the underlying motivations for why scientists like to study these things are that 1) it gives them a certain amount of power and authority in these matters, and 2) for the government of the day, to ignore the warnings these events bring can spell calamity. For it is not the disasters themselves that are the concern, but the direct impact on the people and, in turn, their reaction to the disasters, should the government be seen as incompetent.

☷ And it is here where the Tianqiren diverges from the weatherman of the nightly news. It is the Tianqiren's role to be able to read the Will of the people and know, like the ocean currents, how the people will react, by when and by how much. And all of this is born out of repeated observations of how people actually do things, and not what they say.

☱ Take here in Australia, for example: after two weeks of social distancing, the word on the street is that people are starting to feel they have had enough of the "we're in this together" stuff, and there is a feeling of being under some sort of voluntary martial law. Which means that the restrictions - while well meaning - are starting to remind the men of the days when Australia was a penal colony. And for the women, it is starting to feel like social rejection, because social intimacy is not allowed (except with immediate family). On top of that, some medical experts are starting to suggest that anybody who is outside must wear a mask at all times, which sounds a lot like certain countries around the world.

☲ Now at the moment, nobody is advocating anything extreme, but a good Tianqiren knows, and would advise their boss, that trouble is brewing when people start calling for heads to roll. And given that I am using the COVID-19 coronavirus only as a real-life working example, one that has existed for a mere 5 to 6 months, the change in the kingdom's state of affairs has been shockingly fast!

☴ I understand that for most Baguazhang practitioners the term Tianqiren is a new one, and I have to admit that I first came across it in author Fonda Lee's novel 'Jade City'. But given that Baguazhang is of Chinese origin, it is a better term to use than the Mafia's consigliere or the Chinese word for sage, Zhi 智, which can also mean Saint. One of the best historical examples was Zhuge Liang 諸葛亮 the strategist (181-234 AD). A veritable great mountain in his own right.

Lithium-ion Rocks!
E22 - The Clash. Werewolves of L(ondon)ME Week Calling. BNEF, Rio & More!

Lithium-ion Rocks!

Play Episode Listen Later Nov 8, 2019 43:58


The tenor of Lithium-ion Rocks co-host Howard Klein’s four-day London visit for LME Week remained relatively bearish on lithium and the broader resource market: tariffs inflicted on aluminum and steel have had a real impact, while the ongoing trade war, slowing China GDP, negative EU yields, Brexit, and the impeachment narrative overhang are all weighing on sentiment. And yet there were kernels of optimism from cash-flow machine Rio Tinto’s CEO J-S Jacques, as 60%+ iron ore EBITDA margins and reduced gearing enable a vision built on a decade of discipline, peering into a future challenged by complex geopolitics, disruptive technology, and broader environmental/societal considerations. SUSTAINABILITY.

The LME’s newly announced advisory committee for a lithium price - including Tesla, Jaguar, BASF, Tianqi and Albemarle - revived “The Clash” among price reporting agencies and other forecasters. Bloomberg NEF’s Sophie Lu and James Frith join Rodney and Howard with their bullish views on nickel-based cathode chemistries. “We see battery grade hydroxide that reach the specs needed to be very tight, even in the short term.... Around 40% of battery grade capacity are being developed not by Tier One producers... Any one project not fulfilling the time lines could result in short-term deficit...”

Podcast Index
0-12 - Takeaways & Excerpts from LME Week Battery Materials Presenters
12-21 - Rio Tinto Keynote Speech Excerpts
21-47 - Interview with BNEF’s Sophie Lu and James Frith

Lithium-ion Rocks!
E21: After The (white) Gold Rush: E3 LiTHiuM. Alive-nt in Alberta

Lithium-ion Rocks!

Play Episode Listen Later Nov 5, 2019 48:34


Tesla Profits & Lithium Prophets

As LME Week transitions toward Hotel California, Benchmark Minerals Anodes/Cathodes, and Deutsche Bank’s annual New York Lithium & Battery Supply Chain conference, Rodney Hooper shares thoughts on profitability - Tesla Q3 and Volkswagen’s long-term expectations - BMW’s direct battery materials sourcing, Tianqi’s overrun at Kwinana, and the math behind $12K+ battery-quality lithium pricing long-term. Howard comments on North American Rock - the evolving Carolina to Quebec Hydroxide Hub - and borate/lithium plays in the Western USA. And let’s not forget Brien Lundin’s 45th Annual New Orleans Investment Conference over Halloween weekend, at which Benchmark’s Simon Moores & The BOSS Keith Phillips @PiedmontLithium will be the principal EV-angelizers of the lithium investment thematic.

New Kid in Town: E3 Metals Corp (TSX-V: ETMC, CAD 10M market cap)

After The (white) Gold Rush of 2016/2017, and with substantial shortages of battery-quality lithium chemicals forecast by as early as 2022/23 by virtually every analyst who has examined the lithium business the longest and in greatest detail, Livent, the world’s fifth biggest producer and acknowledged technology leader, has executed its first meaningful investment outside its core resource in Argentina and processing facilities in North Carolina, China, India and the UK. Committing up to USD 5.5M with the potential to convert to 19.9% ownership of E3 Metals, Livent looks to be making a low-risk, potentially high-reward foray to prove, via a Joint Development Project, that a Direct Lithium Extraction/Ion-Exchange Process can commercially convert the lithium-rich Leduc oilfield reservoir - a 6.7M LCE inferred resource to date - to an initial 20,000 t of battery-quality lithium hydroxide production by 2023, and scale from there.

In E21, “Alive-nt in Alberta: E3 LiTHiuM,” LithiumIonRocks discuss with Chris Doornbos, CEO of E3 Metals, the back story to this under-the-radar lithium developer, and expected milestones over the next 12-18 months and beyond on the road to potentially becoming a new, secure & sustainable supplier of what Volkswagen calls the “Irreplaceable Element for the Electric Era”.

Podcast Index:
0 - 2:37 - Introduction
2:38 - 12:15 - Rodney on Tesla, EU, $12K+ long-term BQ lithium price
12:16 - 24:30 - Howard on Nemaska, Piedmont, Rio Tinto, ioneer, Senator Schumer’s EV plan
24 - 50 - Q&A with Chris Doornbos, CEO E3 Metals

@LithiumIonBull
@RodneyHooper13
libull.com
www.patreon.com/lithiumionrocks

Not Investment Advice. Read Disclaimer. Do Your Own Research.

The Global Lithium Podcast
Episode 3: Rio Tinto, Tianqi, ORE, Goldman & Nemaska

The Global Lithium Podcast

Play Episode Listen Later Oct 23, 2019 17:43


It has been a busy week for announcements in the lithium world. I cover the topics in the title and answer a couple of rapid-fire questions. If you have been blocked by me on Twitter, there is probably a good reason, but feel free to voice message me via Anchor and tell me why I shouldn't have blocked you in the first place.

Lithium-ion Rocks!
E16: Fastmarkets #Lithium19. Santiago. LCE Meets LME

Lithium-ion Rocks!

Play Episode Listen Later Jun 24, 2019 51:56


Will Adams & Jon Mulcahy @FastmarketsMB banter with @RodneyHooper13 & @LithiumIonBull about Takeaways and Giveaways as #Lithium19 Santiago #Chile concluded last Wednesday. Featuring: Sounds of Silence. Us & Them.

Timeline
0:00 - 5:07: Introduction. VW ID Buzz. Sounds of Silence.
5:08 - 6:15: Fastmarkets introduction
6:16 - 10:35: Fastmarkets conference takeaways
10:36 - 12:10: Shouldn't OEMs lock in equity & off-take deals at current low prices?
12:11 - 14:00: Cathode chemistry trends
14:01 - 17:10: Hydroxide and carbonate pricing
17:11 - 22:58: Tianqi, Ganfeng, Livent, Albemarle banter. Sustainability. Transparency. Us & Them
22:59 - 27:35: Europe & battery supply chain. Infinity. Keliber. Savannah.
27:36 - 31:12: EV range, infrastructure, & carbon neutrality.
31:13 - 35:26: General commentary & Fastmarkets overview
35:27 - 41:30: LME contract & price forecasts
41:31 - 44:50: Project funding
44:51 - End: Trade war, EV sales & LME

Lithium-ion Rocks!
E7: Like a Rock(wood Lithium)

Lithium-ion Rocks!

Play Episode Listen Later May 14, 2019 51:12


In this episode, Lithium-ion Rocks! welcomes two esteemed ex-Rockwood Lithium senior executives. We ask Gerrit Fuelling, who oversaw $200M in lithium sales from Asia, whether the $40,000 capital intensity per tonne of lithium hydroxide paid by Albemarle for its JV with Mineral Resources at Wodgina is the new norm. Are Argentina brines really as low cost as advertised? Where might we see lithium chemical prices in 5, 10 years? Gerrit addresses these and more, and also shares thoughts on Albemarle, Livent, Ganfeng, SQM and Tianqi assets and M&A growth strategies. From KKR portfolio company to a 4X+ return over the 2005-2015 10-year period, Rockwood Lithium is the source of Albemarle being one of the best-positioned specialty chemical lithium producers on the planet. With continued bold bets on lithium, will ALB CEO Luke Kissam pull off returns over the 2015-2025 10-year period similar to those of his ROC predecessor Seifi Ghasemi?

Tim McKenna shares tales from 10 years in an important government, investor and public relations role at Rockwood Lithium, which included a battle and then a partnership with China's Tianqi for the world’s best hard rock lithium mine at Talison/Greenbushes. Tim also shares his successes with the US Department of Energy during the Obama administration and his views on the current US government’s perspective on critical battery raw material security of supply.

How China Works
HAN TIANQI | Voices of the Future

How China Works

Play Episode Listen Later Apr 11, 2019 44:21


Han Tianqi is International Co-Chair and President of the China branch of the G20 Young Entrepreneurs’ Alliance, CEO of APEC Voices of the Future (China), a Forbes "30 Under 30" honoree, and our guest this week on the show. Great talk! MORE: https://www.howchinaworkspodcast.com/resources

SCOLAR on the Belt & Road
#10. Tianqi Han: on APEC, G20 & nurturing young entrepreneurs

SCOLAR on the Belt & Road

Play Episode Listen Later Mar 12, 2019 36:46


Happy 10th episode to "SCOLAR on the Belt & Road"! Today's guest is a prominent young individual at the forefront of connecting young entrepreneurs across China and the wider Asia-Pacific. Tianqi Han, President of the China Branch of the G20 Young Entrepreneurs Alliance and CEO of APEC Voices of the Future in China, was named one of the 30 best social entrepreneurs under the age of 30 in 2018 by Forbes for his efforts in youth development. Besides gaining official recognition for the APEC Youth Report, growing institutional connections with leading businesses and individuals, and establishing the VOF Mentorship Program under APEC, he co-founded an environmental technology company and served as founding co-chair of the Oxford Emerging Markets Summit. Over the next 30 minutes, you will hear how Tianqi's story started with the APEC Voices of the Future National Contest back in 2013, and his personal views on the importance of supporting cross-border interaction and the mutual “ideational” nourishment of youth, institutional support for young people from global-scale institutions and initiatives such as APEC, G20 and B&R, as well as the challenges facing the young generation today. Enjoy!

Business News - WA
Mark my words podcast 27 October 2017

Business News - WA

Play Episode Listen Later Oct 26, 2017 16:33


In this Business News podcast Mark Pownall, Matt Mckenzie and Tori Wilson discuss house prices, Tianqi, Silver Yacht sale, Perth lord mayor, women in business awards, and health care.
