N Engl J Med 2013;369:1115-23

Background: The COURAGE trial was published in 2007. It compared up-front PCI to medical therapy alone in patients with stable CAD. Preventive PCI did not reduce the chance of dying or having a heart attack over a median follow-up time of 5 years. The results rocked the cardiology world because for years prior to the publication of COURAGE, the standard of care called for revascularization of obstructive coronary stenosis. Despite what we would consider minor criticisms of COURAGE, the results have held over time, as a preventive PCI strategy has failed repeatedly to reduce death or MI compared to medicine alone in subsequent large trials (BARI 2D, FAME 2, ISCHEMIA and ISCHEMIA-CKD) involving patients with stable CAD. But what about patients with acute coronary syndromes who have a clearly defined "culprit" lesion and stable coronary stenosis of a non-infarct vessel? On the surface, the answer might seem simple: treat the "culprit" lesion with PCI and leave the stable disease alone, continuing optimal medical treatment of stable CAD indefinitely with consideration of revascularization only if new symptoms arise. But what if a stable coronary stenosis behaves differently in a patient with an acute coronary syndrome than in patients without it? Are these patients predisposed or particularly susceptible to acute plaque rupture and thrombogenesis to such an extent that they would benefit from a preventive revascularization strategy? The Primary Angioplasty in Myocardial Infarction (PRAMI) trial sought to test the hypothesis that immediate preventive PCI of non-culprit vessels plus the culprit vessel, compared to culprit-vessel-only PCI, would improve outcomes in patients with a STEMI and coronary stenosis of a non-infarct-related artery.

Patients: From 2008 through 2013, patients were enrolled from 5 coronary care centers in the United Kingdom. Patients could be any age with acute STEMI and multivessel CAD detected at the time of emergency PCI. The trial was limited to patients with STEMI because ST-segment elevation, unlike ST-segment depression, localizes the area of ischemia in the myocardium, and an "infarct artery" is usually easy to distinguish. Clinically stable patients were considered for eligibility after undergoing PCI of the infarct artery while they were in the catheterization lab. They were eligible if successful PCI of the infarct artery was performed and there was stenosis of 50% or more in one or more non-infarct arteries. Exclusion criteria included cardiogenic shock, previous CABG, left main disease or significant disease in the ostia of both the LAD and circumflex vessels, or a chronic total occlusion as the only non-infarct stenosis.

Baseline characteristics: The trial screened 2,428 patients and randomized 465 (19%), with 234 assigned to preventive PCI and 231 to no preventive PCI. The majority of screened patients were excluded for single-vessel disease (1,122/1,922 [58%]). The average age of patients was 62 years and more than 75% were men. Close to 50% were current smokers. The infarct artery was anterior in 35%, inferior in 60% and lateral in 5%.
Approximately 65% of patients had 2-vessel disease and 35% had 3-vessel disease.

Procedures: After completion of PCI in the infarct artery, eligible patients were randomized, and those assigned to the preventive-PCI group underwent the procedure immediately in all non-infarct arteries with a coronary stenosis >50%. PCI at a later date (sometimes this strategy is referred to as "staged PCI") was discouraged in the no-preventive-PCI group unless it was symptom driven. Any patient in the trial with subsequent symptoms of angina that were not controlled with medicine was required to undergo objective assessment of ischemia to secure a diagnosis of refractory angina. Follow-up information was collected at 6 weeks and then yearly thereafter.

Endpoints: The primary endpoint was a composite of death from cardiac causes, nonfatal MI, or refractory angina. Secondary outcomes included the individual components of the composite endpoint along with noncardiac death and repeat revascularization. Myocardial infarction was defined as symptoms of cardiac ischemia and a troponin level above the 99th percentile upper reference limit. However, within 14 days after randomization, an MI diagnosis also required ECG evidence of new ST-segment elevation or left bundle branch block and angiographic evidence of coronary artery occlusion (essentially, only in-stent thrombosis or spontaneous STEMI count and other causes of peri-procedural MI do not; this would bias the trial in favor of the preventive-PCI group). Refractory angina was defined as angina despite medical therapy with objective evidence of myocardial ischemia (i.e., ischemia on ECG during a spontaneous episode of pain or abnormal results on functional testing).

It was determined that 600 patients would be needed to achieve 80% power to detect a 30% relative reduction in the preventive-PCI group, at a 5% level of significance, assuming an annual rate of the primary outcome of 20% in the control group. Stopping criteria were prespecified if the results from the trial showed a primary outcome difference at the 0.001 level of significance.

Results: The trial was stopped early based on a significant difference at the prespecified stopping threshold (P<0.001). In patients with STEMI and non-infarct-artery stenosis >50%, preventive PCI significantly reduced the primary composite outcome of cardiac death, nonfatal MI and refractory angina in the PRAMI trial, with an estimated NNT of 7 patients over 2 years. Individual components of the primary endpoint that were significantly reduced included nonfatal MI and refractory angina, by similarly large margins. These results may seem impressive at first glance, but we urge extreme caution in their interpretation. First, this is a relatively small trial with a historically large effect size, especially considering that hard endpoints like cardiac death and nonfatal MI were included. Such results are often later found to be falsely positive when larger, confirmatory studies are conducted. Second, the trial was stopped early, and early stopping is prone to yield false positive and/or exaggerated results. Third, inclusion of refractory angina in the primary endpoint, an endpoint susceptible to bias in an unblinded study (see earlier discussion of "faith healing" and "subtraction anxiety" in FAME 2; consideration must also be given to nocebo effects in patients who know they have "untreated blockages"), clouds the main findings by inflating the effect size and making the trial susceptible to large differences in underpowered endpoints before sufficient data can be accumulated on hard outcomes.
For example, if the trial had sought to detect a conservative difference of 30% in a primary composite endpoint that included only cardiac death or nonfatal MI, based on an event rate of 12% in the control group (the actual event rate in the trial), over 2,200 patients would be needed for 80% power at a 5% level of significance. The estimated number of actual events would be around 230. However, only 47 such events occurred in PRAMI, making the results highly susceptible to noise. While the results of PRAMI suggest a beneficial role for preventive PCI in patients with STEMI, more evidence is needed to confirm them.
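To see where these numbers come from, here is a minimal sketch of the standard normal-approximation sample-size calculation for comparing two proportions (our code and helper name, not the investigators'; results are approximate):

```python
from math import ceil, sqrt
from scipy.stats import norm

def total_n(p_ctrl, rel_reduction, alpha=0.05, power=0.80):
    """Total sample size (two equal arms) to detect a relative risk
    reduction between two proportions; normal approximation, two-sided."""
    p1 = p_ctrl
    p2 = p_ctrl * (1 - rel_reduction)
    p_bar = (p1 + p2) / 2
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n_arm = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
              + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
             / (p1 - p2) ** 2)
    return 2 * ceil(n_arm)

# Hard endpoints only: 12% control event rate, 30% relative reduction
n = total_n(0.12, 0.30)
expected_events = (n / 2) * 0.12 + (n / 2) * 0.12 * (1 - 0.30)
print(n, round(expected_events))  # ~2,200+ patients, on the order of 230 events
```

Running this reproduces the article's arithmetic: roughly 2,200 patients and roughly 230 expected hard events, against the 47 that actually occurred.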
Cloudflare built a global cache purge system that runs in under 150 ms. This is how they did it. Using RocksDB to maintain a local CDN cache, a peer-to-peer distributed system across data centers, and clever engineering, they went from a 1.5 second purge down to 150 ms. However, this isn't the full picture, because that 150 ms is actually just the P50. In this video I explore Cloudflare's CDN work, and how the old core-based, centralized Quicksilver lazy purge compares to the new coreless, decentralized active purge. I explore the pros and cons of both systems and give you my thoughts on the design.
0:00 Intro
4:25 From Core-Based Lazy Purge to Coreless Active
12:50 CDN Basics
16:00 TTL Freshness
17:50 Purge
20:00 Core-Based Purge
24:00 Flexible Purges
26:36 Lazy Purge
30:00 Old Purge System Limitations
36:00 Coreless / Active Purge
39:00 LSM vs BTree
45:30 LSM Performance Issues
48:00 How Active Purge Works
50:30 My Thoughts About the New System
58:30 Summary
Cloudflare blog https://blog.cloudflare.com/instant-purge/
Mentioned Videos
Percentile Tail Latency Explained (95%, 99%) Monitor Backend performance with this metric https://www.youtube.com/watch?v=3JdQOExKtUY
How Discord Stores Trillions of Messages | Deep Dive https://www.youtube.com/watch?v=xynXjChKkJc
Fundamentals of Operating Systems Course https://os.husseinnasser.com
Backend Troubleshooting Course https://performance.husseinnasser.com
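As the video stresses, the headline 150 ms is a median, not a worst case. A quick illustration of how percentile latencies are read off a sample (synthetic numbers, not Cloudflare's data):

```python
import numpy as np

# Hypothetical purge-latency samples in milliseconds, skewed like real
# latency data often is (lognormal with a 150 ms median).
latencies = np.random.lognormal(mean=np.log(150), sigma=0.6, size=100_000)

for q in (50, 90, 99, 99.9):
    print(f"P{q}: {np.percentile(latencies, q):.0f} ms")
# A P50 of ~150 ms can coexist with a P99 several times higher, which is
# why quoting only the median understates what the slowest requests see.
```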
All my resources with discounts, links, courses, consultations and EVERYTHING are in this single link, just click and it takes you to everything https://linktr.ee/dulcedagda Instagram ♧ https://www.instagram.com/dulcedagda/ follow @HolaYasmany Contact email dulcedagda@gmail.com

[00:00] Introduction and welcome to Yasmani. The host, Dulce, introduces her guest Yasmani, a beauty and skincare content creator. Yasmani inspired Dulce to start her podcast in 2019, after they met collaborating on videos.
[02:30] Yasmani's beginnings in content creation. Yasmani says he has been creating beauty content for more than 14 years. He started because he was passionate about skincare and began at a time when he didn't have much going on in his life. Despite being a shy person, he worked up the courage to speak in front of the camera.
[06:45] Trips with his followers. Yasmani began organizing trips with his followers after receiving requests to put on these events. In the last two years, he has organized seven trips, combining tourism with beauty recommendations.
[09:40] Beauty as a confidence tool. He explains that beauty is not merely superficial; it is a tool for building self-esteem and confidence. He recounts how his content has impacted many people's lives by helping them improve their self-esteem.
[12:00] Essential beauty tips. Among Yasmani's main tips are consistent use of sunscreen and the importance of facial massages and ice therapies, which are accessible and effective ways to improve the skin.
[14:15] Expensive products that are worth it. He recommends SkinCeuticals vitamin C, an expensive but effective product. He also mentions Biologique Recherche's P50 lotion, known for improving skin texture.
[18:45] Skincare devices. Yasmani discusses the effectiveness of at-home devices such as microcurrent tools and LED masks, which, although not as powerful as medical treatments, do deliver visible results when used consistently.
[21:20] Basic skincare routine. He suggests a basic routine for beginners: cleansing, moisturizing and sunscreen. For those who want to add more steps, he recommends a vitamin C serum in the morning and retinol at night.
[28:10] Reflections on consistency in content creation. Yasmani reflects on how the beauty industry has changed over the years and the importance of staying authentic. Although he doesn't go viral like he used to, he values that his followers have stayed connected with him over the years.
[33:00] Conclusion. The episode closes by thanking Yasmani for sharing his experience and advice.

--- Support this podcast: https://podcasters.spotify.com/pod/show/dulcedagda/support
Helix Exploration, the helium exploration and development company focused on helium deposits within the 'Montana Helium Fairway', has announced the commencement of construction work on an access road and drill pad at the Ingomar Dome project area in Montana. The latest Oak Securities note on Helix Exploration: Oak Securities – Helix Exploration (HEX) – Rudyard Acquisition (FINAL)

Highlights
· Construction commenced on a 0.75-mile access road and drill pad at Ingomar
· Two-well Q3 2024 drilling program: Clink #1 well at Ingomar and Darwin #1 well at Rudyard
· Drilling of the Clink #1 well to commence following completion of civils
· The combined budget cost of the two well programmes is estimated at approximately $4.1m, which demonstrates the potential for the Company to operate cost-effective exploration and development within the state of Montana

Timeline
Construction of the access road and drill pad has commenced at the Clink #1 location within the Ingomar Dome Project. The access road has been surveyed at 0.75 miles, allowing civils work to be completed relatively quickly over a short development distance. Following completion of civils, the Company targets commencement of drilling operations in early August:
· Mobilisation and rig-up is anticipated to take approx. 1 week
· Drilling is anticipated to take approx. 3-4 weeks
· Wireline and well completion is anticipated to take approx. 1 week
· Flow testing and appraisal is anticipated to take approx. 4 weeks
The Company will keep investors updated as drilling progresses via RNS, potentially including updates on commencement of drilling, significant helium gas shows in drilling mud, wireline results, initial flow test results and full flow test results.

Budget
Quoted costs for drilling at Ingomar and Rudyard have come in below budget. The quoted cost for drilling and appraising the Clink #1 well at Ingomar is approximately $2,130,000, including an extended well test but excluding contingencies. Significant savings were achieved by using local suppliers from within the state of Montana, reducing mobilisation costs and standing-time charges. The estimated cost for drilling and appraising the Darwin #1 well at Rudyard is approximately $1,980,000, including an extended well test but excluding contingencies. The cost of drilling at Rudyard is lower than at Ingomar due to the shallower target depth, ~5,500ft at Rudyard compared to ~8,000ft at Ingomar. The Company benefits from relatively low drilling costs in Montana compared with other jurisdictions, as well as from savings made by the Company in vendor selection. Expected savings will help maintain adequate funding for planned development activities after completing the two-well Q3 2024 drilling campaign. These activities may include project pipeline growth, detailed plant engineering and construction financing.

Helix Exploration is a helium exploration company focused on the exploration and development of helium deposits within the 'Montana Helium Fairway'. Founded by industry experts with extensive experience of helium systems in the US, the Company's assets comprise 52 leases over the Ingomar Dome, a large closure of 16,512 acres with a P50 unrisked prospective helium resource of 2.3 Bcf and upside of 6.7 Bcf. Historic drilling and/or testing has identified gas in all target reservoir horizons. Helix Exploration will focus on a drilling campaign and early production at the Montana Ingomar Dome Project.
An aggressive development timeline will see a drilling campaign targeted for Q3 2024 and first helium production targeted for Q4 2025. Helix is committed to open and transparent communication with investors and the wider market as the project progresses through development. The Company's Admission Document, and other information required pursuant to AIM Rule 26, is available on the Company's website at https://www.helixexploration.com/.
Helix Exploration PLC (LSE:HEX) chairman David Minchin joins Proactive's Stephen Gunnion following the company's listing on London's AIM market, a significant milestone for the company. Minchin told Proactive that Helix Exploration is engaged in helium exploration in Montana, focusing on the Ingomar Dome, spanning over 16,500 acres. The company boasts a P50 resource estimate of 2.3 billion cubic feet of helium, with potential upside exceeding 6 billion cubic feet. This venture is substantially de-risked by historical drilling evidence of gas across target reservoir horizons. Notably, several formations have shown promising gas presence, setting a solid foundation for drilling activities slated for the third quarter of this year as the company aims for first production by end-2025. Minchin said the Initial Public Offering (IPO) was notably oversubscribed, raising £7.5 million against a target of £3.5 to £5 million, leading to a market capitalisation of approximately £12 million. This capital will finance the drilling and appraisal of a well, alongside an extended flow test. The significant interest is partly attributed to CEO and founder Bo Sears, a renowned figure in helium exploration with over 25 years of experience in the field. Helix Exploration aims to leverage the helium market's robust dynamics, highlighted by a compound annual growth rate (CAGR) of 20% in helium prices over the past decade and high demand from both Chinese importers and US end-users, Minchin said. The company plans to position itself advantageously within this market, particularly benefiting from the US CHIPS Act, which is expected to boost domestic demand for helium due to increased semiconductor manufacturing. Investors can anticipate several milestones in the near term, including the commencement of a scoping study, drilling activities in Q3, and the initiation of an extended flow test. These efforts are expected to culminate in significant developments towards the construction phase and the enhancement of Helix Exploration's project pipeline within the Montana helium fairway.
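A note on the P50 language used in both pieces: prospective resources are quoted as exceedance probabilities, so P90 is the conservative low case, P50 the median, and P10 the upside. A minimal sketch, assuming a lognormal resource distribution fitted to the two quoted figures (our assumption; the company's actual probabilistic model is not described in this text, and we read the 6.7 Bcf "upside" as the P10):

```python
import numpy as np

p50, p10 = 2.3, 6.7                  # Bcf, as quoted by the company
sigma = np.log(p10 / p50) / 1.2816   # 1.2816 = z-score of the 90th percentile
samples = np.random.lognormal(np.log(p50), sigma, 1_000_000)

# Oil & gas convention is exceedance probability: P90 is the *low* case
# (90% chance the resource exceeds it), P50 the mid, P10 the high.
lo, mid, hi = np.percentile(samples, [10, 50, 90])
print(f"P90 {lo:.1f} Bcf | P50 {mid:.1f} Bcf | P10 {hi:.1f} Bcf")
```

Under that assumed distribution, the implied P90 low case would come out under 1 Bcf.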
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let's Fund: Impact of our $1M crowdfunded grant to the Center for Clean Energy Innovation, published by Hauke Hillebrandt on April 4, 2024 on The Effective Altruism Forum.

Let's Fund researches pressing problems, like climate change, and then crowdfunds for nonprofits working on effective policy solutions. One such policy is clean energy innovation (e.g. via more grant funding for scientists to invent better solar panels). Making clean energy cheaper has many benefits because it reduces:
- Emissions
- Energy poverty
- Air pollution (which kills millions a year)
- Revenue for autocratic petrostates[1]
- Extreme climate risks (since if country agreements to reduce emissions, like Paris, were to break down, cheaper clean energy hedges against this[2])

Since 2019, we've crowdfunded $1M for the Center for Clean Energy Innovation (CCEI) at ITIF, a non-profit think tank in DC. One example of our grantee's work is researching the effects of higher and smarter clean energy R&D spending and communicating the results to policy-makers. Our research showed that this is the most effective climate policy[3] and was featured on Vox (which Bill Gates retweeted![4]). As a result, ~2,000 donors crowdfunded $1M+ for CCEI to do more think tank work (e.g. do research, talk to policy-makers, etc.). Here I show how, with our grant, CCEI might have e.g. shifted >$100M from less effective clean energy deployment (e.g. subsidies) to more neglected and effective clean energy R&D. The donations might avert a ton of CO2 for less than $0.10.

That a leading think tank can cause such shifts becomes plausible if we look at the pivotal ('hingey') timeline of a political climate so favorable that climate budgets went up by an unprecedented scale:

2020: Big Government Dems win the presidency, house and a razor-thin margin senate majority. Then a CCEI researcher gets a job advising Biden's climate envoy, John Kerry, who had endorsed and blurbed CCEI's Energizing America report, which has been called 'a very influential report' and advice for Biden on how to reform the energy innovation system.[5]

2021: COVID leads to a massive stimulus that includes ~$42B for clean energy RD&D, doubling the yearly budget, an ~$10B increase.[6] This US leadership led 16 countries to pledge ~$100B for the Clean Energy Technologies Demonstration Challenge recently.

These increases were politically tractable thanks to tens of thousands of climate activists raising awareness worldwide. But CCEI is part of a much smaller coalition of only hundreds of key movers and shakers (others are: CATF, Carbon180, etc.[7]) that improved the quality of these spending increases by channeling them towards energy RD&D, which is ~10x more effective at ~$10/tC than deployment at ~$100/tC averted (more). Also, our $1M grant was ~2% of donations to US climate governance and a respectable 0.2% to all US think tanks.[8],[9],[10]

Based on this, if we assume CCEI caused ~.1-10%[11] of the $10B-100B clean energy RD&D increases, then our Monte Carlo model (see UseCarlo.com) suggests that CCEI averts ~.5Gt at ~$.002/tC:[12]

| Variable | Distribution | p0 | p10 | p50 | p90 | UseCarlo.com output | Notes / Source |
|---|---|---|---|---|---|---|---|
| Energy R&D budget increase | Metalog | $0K | $10B | $42B | $100B | ~50Gt | P10: US increases / y. P50: total stimulus. P90: global agreement |
| CCEI's effect of shifting deploy$ to RD&D$ | Metalog | 0% | 0.1% | 2% | 10% | ~5% | Guesstimate: CCEI is part of the coalition of key movers and shakers that shifted budget increases to energy RD&D |
| RD&D effectiveness | Metalog | $0 | $3 | $13 | $41 | ~$20/tC | Review on the cost-effectiveness of energy R&D |
| Deployment effectiveness | Metalog | $0 | $0.1K | $0.5K | $1K | ~$500/tC | Levelized Cost of Carbon Abatement |
| tC averted via R&D shift | Output | | | | | ~0.5Gt | tC averted by R&D minus counterfactual tC averted by deployment |
| Let's Fund grant | | | | | | ~$1M | |
| CCEI effectiveness | | | | | | ~$0.002/tC | |
| Donor effectiveness | | | | | | ~$0.02/tC | |

Most...
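To make the table concrete, here is a minimal Monte Carlo sketch of the post's model. We substitute lognormals fitted to the quoted p10/p90 values for the post's Metalog distributions (a rough stand-in, so it will not reproduce UseCarlo's outputs exactly):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

def lognormal_from_p10_p90(p10, p90, n):
    """Lognormal stand-in for the post's Metalog distributions,
    fitted to the quoted 10th/90th percentiles."""
    mu = (np.log(p10) + np.log(p90)) / 2
    sigma = (np.log(p90) - np.log(p10)) / (2 * 1.2816)
    return rng.lognormal(mu, sigma, n)

budget   = lognormal_from_p10_p90(10e9, 100e9, N)   # energy RD&D increase, $
shift    = lognormal_from_p10_p90(0.001, 0.10, N)   # share CCEI shifted to RD&D
rdd_cost = lognormal_from_p10_p90(3, 41, N)         # $/tC averted via R&D
dep_cost = lognormal_from_p10_p90(100, 1000, N)     # $/tC averted via deployment

shifted = budget * shift
tc_averted = shifted / rdd_cost - shifted / dep_cost  # net tC from the shift
grant = 1e6                                           # Let's Fund grant to CCEI

print(f"median tC averted: {np.median(tc_averted) / 1e9:.2f} Gt")
print(f"median $/tC (CCEI): {np.median(grant / tc_averted):.4f}")
```

Heavy right tails push the mean well above the median here, which is part of why the post's headline figures are larger than what the p50 inputs alone would suggest.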
Our first ever demo day aimed for 15-20 people and ended up ballooning to >200 and covered in the news. We are now running the 2024 edition in SF on Feb 23: Latent Space Final Frontiers, a startup and research competition in "The Autonomous Workforce", "Beyond Transformers & GPUs", and "Embodied AI". RSVP here! You can find all LS online/IRL events on our new calendar. Super Early Bird tickets have just gone on sale for AI Engineer World's Fair, June 25-27!

Today we have the honor of hosting two of Together AI's co-founders: Ce Zhang (CTO) and Vipul Ved Prakash (CEO). This is a rare opportunity to recap the history of the company since our last check-in with Tri Dao (Chief Scientist), cover some of their big releases, and do a deep dive into the state of the AI inference market. Together has emerged as one of the most consequential new startups in the new AI summer, last announcing a ~$100m Series A raise in November (at a ~$360-565m valuation). But there are at least three Togethers - Together the Research Lab, Together the Fine Tuning & Inference platform, and Together the custom models service. As we clarify on the pod, the overarching philosophy of Together is the ability to improve on all these fronts simultaneously by being "full stack", from the lowest level kernel and systems programming to the highest level mathematical abstractions driving new model architectures and inference algorithms.

Bringing Research and Industry Together

In just one year, Together has been behind some of the most exciting research in AI:
* RedPajama, a fully open source dataset for model pre-training which mirrored the Llama 1 recipe, followed by RedPajama V2, a 30T-token dataset of filtered and de-duplicated tokens.
* RedPajama-INCITE-3B and 7B, which were SOTA in a few benchmarks at the time of release.
* FlashAttention-2, developed by Together's Chief Scientist Tri Dao. We covered FA-2 in a previous episode with him.
* Mamba-3B, the most promising transformer-alternative model that they released in collaboration with Cartesia.
* StripedHyena, a SOTA graft of Hyena state space models and transformer models.
* Medusa, an alternative to speculative decoding that lets you use multiple decoding heads instead of a draft model.
* MonarchMixer, which was one of the most popular orals at NeurIPS 2023. It's an approach to transformers that replaces many of their core parts with Monarch matrices for better computational efficiency.

And I'm sure we missed something! As Vipul reveals, almost 50% of Together's staff are researchers, and two of their co-founders (Chris Ré and Percy Liang) are professors at Stanford, so we can expect a lot more here.

Bringing "Disaggregated" GPUs Together

On their cloud, they offer inference as a service, fine-tuning, pre-training, etc, but unlike other providers they think of themselves as a disaggregated cloud. Today, they have ~8,000 A100 and H100 GPUs on their platform (an exclusive revealed on the pod!) totaling over 20 exaflops of compute, but instead of just buying more, putting them in a cluster, and exposing a `us-east-1` option for customers, they are taking heterogeneous compute sources and adding a unified layer on top of them for developers to consume. Building on Ce's research, Together's GPU Clusters are taking on comparable AWS and GCP offerings in both cost and speed. Take the Hessian AI center in Germany or the DoE's INCITE; they have GPUs that they want to share with researchers, but they lack the cloud layer over them.
Similarly, there's starting to be more and more differentiation amongst types of GPUs: H100s, A100s, MI300s, etc. Each of them has different availability and performance based on task, and the end user shouldn't have to be a hardware expert to run inference on a model, so Together abstracts a lot of that away.

A big theme of the Together inference stack, a "bag of 50 tricks" that we discuss on the pod, is also "hardware-aware" algorithms like FlashAttention and Mamba, which further emphasize the benefits of co-developing everything together.

Special Focus: Transformer Alternatives

As we mentioned above, they are also funding a lot of research in Transformer alternatives. To reiterate a few points on why they matter:
* Longer context is not the motivation for sub-quadratic architectures: Transformers don't inherently have hard limitations on context size, but they just get extremely expensive. When developing sub-quadratic alternatives, you easily enable very long context, but that's not how you should compare them. Even at the same context size, inference and training are much cheaper on sub-quadratic architectures like Hyena (see the sketch after this list).
* Emergence of hybrid architectures: a lot of early conversations have been around the "post-Transformers" era, but it might be more like "half-Transformers". Hybrid architectures could have split layers with some transformer-based and some state-space ones. One of the challenges is that a lot of hardware kernels are optimized for transformer operations, so you'd lose a lot by moving away completely.
* Higher speed = higher GPU throughput: if we could reach the same benchmark performance on sub-quadratic architectures, it'd solve a lot of the GPU crunch. Today we peak at ~170 tok/s on inference in some open models; if we could reach 5,000 tok/s on the same card, you'd be able to serve roughly 30x more customers on the same hardware. As a cloud provider, you're obviously incentivized to get there.

We had a lot of fun chatting with the Together guys and we covered a lot of ground, so enjoy the conversation!
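To make the first bullet concrete, here is a toy cost model (our illustration, with made-up constants; real crossover points depend on kernels, hardware, and model dimensions):

```python
# Toy cost model: self-attention scales ~O(n^2) in sequence length n, while
# state-space layers scale ~O(n). The per-token constants are invented purely
# for illustration; they are not benchmarks of any real kernel.
def attention_cost(n: int, c_quad: float = 1.0) -> float:
    return c_quad * n * n

def ssm_cost(n: int, c_lin: float = 50.0) -> float:
    # Assume a (made-up) 50x higher per-token constant for the linear layer.
    return c_lin * n

for n in (1_000, 8_000, 64_000, 1_000_000):
    ratio = attention_cost(n) / ssm_cost(n)
    print(f"context {n:>9,}: attention is ~{ratio:,.0f}x the linear-layer cost")
```

Even at an 8k context the quadratic term dominates this toy model, which is the point above: sub-quadratic architectures can be cheaper at the same context length, not just at longer ones.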
Note: This is the first episode of a "cloud providers mini-series". We have Erik from Modal and Ben from Replicate coming up next!

Video Podcast

Join us to watch the video version of this pod on our snazzy YouTube!

Show Notes
* Together AI
* RedPajama Dataset v1 Announcement
* RedPajama Models v1 Announcement
* Together Embeddings
* StripedHyena-7B
* Mamba-3B-SlimPJ
* Vipul's X thread on Anyscale
* Vipul's Razor
* SemiAnalysis' "Inference Race to the Bottom" post
* Chris Ré
* Mike Conover's episode
* Slim Pajama by Cerebras
* Dolma by AI2
* Jina AI
* Tengyu's Voyage AI

Timestamps
* [00:00:00] Introductions
* [00:00:43] Origin and current state of Together.ai
* [00:02:15] Transition from Apple to Together and the vision for open AI
* [00:04:54] How Chris Ré introduced Ce and Vipul
* [00:08:43] How RedPajama came to be
* [00:13:34] Model training and Transformer alternatives
* [00:15:37] DSIR and the importance of data in LLMs
* [00:21:19] Inference vs Fine-tuning vs Pre-training usage on Together
* [00:23:20] Together's GPU stash
* [00:27:02] Why standardization of inference metrics is important
* [00:29:26] Building moats in AI inference
* [00:31:49] Federated vs disaggregated cloud computing
* [00:34:57] Opportunities for improvement in the inference stack
* [00:36:13] Anyscale benchmarking drama
* [00:41:27] Not just an inference platform
* [00:43:50] Together Embeddings and the future of embedding models
* [00:45:53] State space models and hybrid architectures
* [00:53:52] The need for 5,000 tokens/s speed in AI inference
* [01:00:23] What's the most interesting unsolved question in AI?

Transcript

Alessio [00:00:00]: Hey, everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.

Swyx [00:00:14]: Hey, and today we're together with Together. Welcome to the studio, guys.

Ce / Vipul [00:00:20]: Thank you.

Swyx [00:00:21]: I don't know how you typically give self intros, but does anyone want to go first? How do we get our audience acquainted, especially to who's speaking, because it's unusual for us to do a four-person pod. Yeah.

Ce [00:00:33]: Hi, everyone. I'm Ce. I'm one of the co-founders of Together and the CTO, working with the team on technical things.

Vipul [00:00:40]: I'm Vipul Ved Prakash, co-founder and CEO of Together.

Swyx [00:00:43]: I always consider you guys as one of the sort of all-in-one companies. I always want to say labs, but I feel like you're not a lab. What is the sort of origin of Together, and then what is it today? I feel like it used to be Together.xyz, and then now you're Together.ai.

Vipul [00:01:00]: I think fundamentally, Together is about open and independent AI systems. We think this is one of the most consequential technologies of our time, and when we started the company in June 2022, our focus was to build a platform for open source, independent, user-owned AI systems. One way to think about it is big labs, frontier model labs, have built their own developer platforms for their models. We think of Together as a platform for everything else, whether these are open models, whether these are models being built by companies that are owned by them. Our sort of XYZ roots, we have a fairly deep decentralization and open ethos that kind of reflects in all our platform and strategy and business.
And we also, the way we structure our cloud is by combining data centers around the world. You know, we are today not located in hyperscalers; we have built a footprint of AI supercomputers in this sort of very disaggregated, decentralized manner.

Alessio [00:02:15]: I know before Together, you were at Apple, so you go from like the most walled garden, private, we-don't-say-anything company, to we want everything to be open and everybody to know somebody. What maybe did you learn from like the Apple way of being super closed and polished, and maybe what are you taking now to Together to make it open, but also a very nice developer experience?

Vipul [00:02:37]: Yeah, I would say, you know, one sort of, my, you know, background has been in open source for a long time. One of the first things I created was a collaborative spam filter, you know, this was back in the day. It's called Vipul's Razor. And it became quite popular. And the first company I founded, called Cloudmark, was built around, you know, taking open source and building both an open side of it and a commercial product around it. I think Apple is sort of very focused on providing this amazing experience to its customers with, you know, most of the technology sort of hidden behind the product. And certainly the focus on fluidity and applying complex technology to make everyday things simple is something that Apple does really well. And, you know, that's been a sort of big part of how we think about our developer platforms. I think it informs it. The other thing is that during my years at Apple, we, you know, worked a lot on deep learning. And one of the things that was sort of very viscerally accessible to me was how well these systems worked. We, you know, we built an open domain Q&A system. This was based on Facebook's LSTM paper in 2016. And it was remarkable because we had a parallel system based on sort of information retrieval techniques, which is extremely complicated, that didn't work that well. And you know, this thing we wrote in a week was just incredible performance. So I think some of those experiences, at least for me personally, sort of were creating this roadmap of how important and powerful this technology is. And you know, when the scaling laws paper was published, it was very clear to me that this was in some ways something very profound. We've never had algorithms that improve in capabilities with scale. So this is almost a new era of computing. So that's been, I think, the influence of Apple, my years at Apple, really for me, like crystallized the value of what we are doing together.

Alessio [00:04:54]: And how did you decide to join forces? Because you did a postdoc with Chris Ré at Stanford. You know, we already had Tri Dao from Together and we talked about Hazy. What was like the meeting of the minds of, hey, I come from like the more technical postdoc assistant professor background and we've got yet a more product thing. What got you excited to like build this now?

Ce [00:05:15]: So we have been working on this together, Chris and I, in essentially the last like 10 years, right? So a machine learning system 10 years ago was like probabilistic graphical models, right? And then convolutional neural networks and then all the foundation models that we see today. But if you look at this, I think that fundamentally the thing we are actually optimizing is actually not that different. It's always about data movement across essentially all the stacks, right?
So when you do distributed like computing, it's about communication across different machines. When you do, for example, FlashAttention, it's about data movement at a different, essentially, memory hierarchy, right? So we have been doing this in the last 10 years and seeing the field start to grow, grow, grow. So we kind of feel the current kind of this like wave of technology is actually the perfect time to actually bring all the research essentially into something real. And we are super lucky that we got introduced to Vipul, right? And then we hoped to join forces and bring this to the real world.

Swyx [00:06:10]: It's an unusual team of like sort of research and industry. Like you've been like a third or fourth time founder now. Third time founder, yeah. And so like what is your first order of business when you like set up together? Like how do you sort of put something like this together? Oh my God, I'm going to use this word so much.

Vipul [00:06:27]: I feel AI companies are really kind of driven by research. And Chris and I had been talking about how to reduce the cost of building models. We felt that there aren't really big data moats around foundation models. They are built from a subset of the web. What is difficult is the cost of capital to build these. And one of the ways in which you can reduce this cost is by making more efficient systems. With that, it was really about finding the right set of co-founders and team. In fact, when Chris introduced me to Ce, I think within the first five minutes of talking to Ce, I was like, we are starting this company. And our early focus was thinking about this more sort of disparate set of resources, you know, GPUs around the internet. Can we use those to build? And we really have to compress communication for, you know, when we do gradient averaging, there's just a lot of traffic. And if you can reduce that somehow, you sort of open up the possibility of using cheaper compute, you know, across the network. And Ce's research for a decade has been in that subject. You know, and from there, finding, you know, other folks in the network, I think there is generally a lot of excitement and philosophical alignment around what we are doing, which, you know, we publish papers, we publish open source libraries and code, we build open models. And I think the people in academia in, you know, machine learning and NLP, that's really what they want to do. So I think that's been really a kind of kernel for, you know, the composition of the company. And we're lucky to have, you know, at this point, attracted some of the best researchers in the field. So I think that's the most important thing. And, you know, the rest of it is sort of driven by us. A couple of these philosophies around independent systems and decentralization and good developer interfaces, you want to make it accessible. That's, you know, just as important. And the rest follows from there, I think.

Alessio [00:08:43]: I want to try and fill in some of the blanks in the history of Together. I think people come on your website today and they say, you raised a hundred million dollars Series A. They're like, wow, these guys are like a super legit company. But it feels like Red Pajama just came out a year ago. I remember we had Mike Conover in the studio, who had built Dolly at Databricks. And you announced it literally the morning we were recording. So we're like in the studio on our phones, looking at it.
And it's like, wow, this is like the first time now there's like a good curated dataset to do open pre-training. So maybe let's start from there. Like, what was the motivation behind it? Why did you decide to do that? Datasets are one of the things that most people don't want to work on. They just want to do models, not datasets.

Ce [00:09:27]: Yeah. So, yeah, first, it's not the first, right? So I think it's actually built on a whole bunch of amazing effort the community already has. For example, EleutherAI has the Pile, right? There's a whole bunch of amazing datasets they have, like C4, right, from Google, right? So I think we really got inspired by the impact those datasets have on the community, right? So I think when we did RedPajama, it was a time that people were really fascinated by Llama, the model, like Llama 1, right? Which feels like decades ago, right? But people were really excited about the quality, right? So that's really like a big shift in how people think about open models. People started to see hope, right? So, but the one problem with Llama is that the data recipe is described in a pretty detailed way in the paper, but the data is actually not there. So our original thinking was, how about we take the recipe and we try to do our best-effort reproduction and try to put it out, such that we can learn from our mistakes in the reproduction together, right? So that's essentially the original thinking behind RedPajama. And we have been pretty happy and excited about what the community has built on it. For example, there's a dataset called Slim Pajama, right? Which does deduplication over our data, right?

Swyx [00:10:38]: From Cerebras, did they talk to you before?

Ce [00:10:39]: Oh, yeah, yeah, yeah, yeah. So, yeah, we are very good friends so we can discuss from a technical perspective. We are pretty excited because I think the reason we did RedPajama in the first place is that people can actually build not only models, but also datasets, essentially over that piece of artifact, right? So that's actually what inspired us to do the first version of the RedPajama dataset.

Swyx [00:11:01]: Yeah, and then you released V2 maybe two months ago.

Ce [00:11:04]: Yeah.

Swyx [00:11:05]: 30 trillion tokens.

Ce [00:11:06]: Yeah, 30 trillion tokens. So I think what's exciting about RedPajama V2 is not only the number of tokens, but we started to kind of learn from RedPajama V1. So one thing that we learned was that data quality is really the core, right? So you want to take this couple-trillion-token dataset and try to bring it down maybe to one trillion or two trillion, right? The way that you actually filter it, deduplicate it, is not something that can be pre-decided before you see the application, right? So you kind of want to have a modular framework to think about data quality, right? So given an application, let's automatically or maybe semi-automatically try to come up with a way to filter it down. So that's why in RedPajama V2, we overlay the dataset with like 40 different pre-computed quality signals, right? If you want to reproduce your best-effort, like, C4 filter, it's kind of like 20 lines of code, right? And this opens up this opportunity that you can actually put different filters together, learn the combination of filters. We are very excited to see what the community actually comes up with using RedPajama V2.

Swyx [00:12:11]: It was retrospectively so obvious that this is a good idea that I wonder how come more datasets don't do this.
You release the dataset with all these toggles that you can turn on and off, right? And you can sort of tune up and down the quality in ways that you believe are important to you. Yeah, I just, it makes so much sense now in retrospect. Because everyone just publishes like their pipeline and then the end result. But what about all the intermediate stages? Yeah.

Ce [00:12:35]: Yeah, so I think there are multiple things there. I don't think we are the only one doing that. For example, like Dolma from AI2, right? They have this very flexible format to actually put in those quality signals, right? I think we are actually calling them the same, right? So you can actually load RedPajama using their tool. That whole thing should work, right? So I think one fundamental thing that changed in the last year, essentially, is that in the beginning when people thought about data, it was always like a byproduct of the model, right? You release the model, you also release the data, right? The dataset is there essentially to show people, ah, if you train on this data, you'll get a good model. But I think what started to change is that when people started building more and more of those models, people started to realize that different subsets of the dataset are kind of valuable for different applications, right? The data becomes something to play with, right? So I think we are kind of lucky that we happened to release RedPajama right at that point, so that we got this opportunity to actually learn from that.

Alessio [00:13:34]: And you guys have a custom model training platform on Together, too. You have a bunch of stuff in there for data selection, like DSIR and things like that. How did you decide to work on that versus, because you first started with like some of the fine-tunes on Llama. Do you see a lot of interest there? And I know you've been doing a lot of research on state space models and other transformer alternatives. Like, do you also see that as something you'll keep working on this year and push more people towards?

Vipul [00:14:02]: Yeah, I mean, we, you know, we think of how to make training more efficient and building models more efficient. Part of that is being able to select the right dataset. This is why you have signals, DSIR. You can start with a small dataset and find similar documents, build models with that. So we think it's an important part of the kind of model build tooling that, you know, is sort of widely useful for people building different kinds of models. Similarly, you know, we are running into the limits of how fast you can make transformers. And we want inference at 5,000 tokens per second. I don't think we will get there with transformers, and we need to learn longer sequences. Data, again, becomes very, very expensive with transformers. So I work on state space models and all the research that we are doing there. And hopefully other labs will pick up on this and make it a kind of important target for optimization. But we think that, you know, open source is a great place for this. We can provide these recipes for data and for training to our customers who are building, you know, custom models themselves. And, you know, we are quite excited about the sort of progress we are seeing there.

Alessio [00:15:18]: Do you have some of these models available for inference on Together?
Can people play around with it, you know?

Swyx [00:15:25]: Yeah.

Vipul [00:15:25]: Yeah, they're available for inference on our serverless platform.

Swyx [00:15:29]: I always try to be the person who asks about acronyms in case, you know, people want to understand. Should we explain importance resampling, you know, that kind of stuff?

Ce [00:15:37]: Oh, yeah. So DSIR, essentially, it's a fundamental idea. So it's one of the papers from Percy, right? So essentially, if you know what you are doing, you can actually use that as a very strong signal about what data to put into the training process, right? So that's essentially the fundamental idea, right? So, and then more concretely, right, there are actually different versions of DSIR, right? So one version is, if you have a validation set, right, you can actually somehow measure the similarity between the validation set and your pre-training corpus and essentially select the subset. And often there's actually a less targeted version of DSIR where you'll say, yeah, maybe Wikipedia is actually a very good corpus. Let's try to find more Wikipedia, right? And you can think about it in two ways: either as a way to come up with different weights for different data slices, yeah, as like a filter type of step for a dataset, or think about it as data augmentation. So that's how, yeah, that's how we think about DSIR.

[A minimal code sketch of this idea appears at the end of this transcript.]

Swyx [00:16:33]: That makes sense. I will have to read the paper to understand a little bit more. Because when you say things like, we have to know in advance what we were trying to do with the model, then we do importance resampling. That is against the principle of general intelligence, right? Like the point is to train AGI.

Ce [00:16:48]: Yeah, so it depends on what you mean by being general or generic, right? So I think, I mean, you can always take a meta-learning perspective that we know the distribution of tasks that we care about, right? So you can always go kind of up the ladder of how general the whole thing is, right? But also for many of the customers that we are actually talking to, right, they have kind of very targeted applications, right? The benefit you can get out of that is you could build a better open model, often smaller, often easier to do inference on, if you know what you want, right? So I think the whole trade-off would be, the x-axis would be how generic the whole thing will be. The y-axis would be not only the top accuracy, but also a whole bunch of the deployment costs, right? The size of the model, right? The robustness of the model. So I think different people will navigate the space in different ways. And we want to be the platform where, essentially, whatever point you want, we have a solution for you.

Swyx [00:17:43]: One more thing on data before we go deeper on state-space models. Are we running out of data? Can we go an order of magnitude? Can we go five orders of magnitude? How do both of you think about how much data we have and how much we need?

Ce [00:17:55]: Yeah, so I think that's a very, very good question. So I don't think we are running out of data on Earth.

Swyx [00:18:02]: Right, so think about it globally. Training data, training-class data.

Ce [00:18:05]: Yeah, yeah, so I think, I mean, some of it is not accessible, right? But I do think there are many organizations in the world that have enough data to actually train very, very good models, right? So, I mean, they are not publicly available, right?
But there are people who actually have access to those, right? So I think in general, right, if you think about the data in the open space, right, so I guess what you actually mean is whether we are running out of data there. I do think there needs to be some way, right, that people who are training open models get connected with, essentially, data that's not internet data. So I think that channel needs to be opened up for open models to get more data, right? But I'm kind of on the optimistic side that society will figure out a way that we can train open models beyond this internet data.

Swyx [00:18:57]: Beyond internet, meaning books?

Ce [00:19:00]: I mean, there are a lot of those, right?

Swyx [00:19:02]: Books, right?

Ce [00:19:02]: Transcripts, right? Videos, audios, right? So there are a whole bunch of data sources that we are not integrating into the open data side, right? So, and maybe they shouldn't be open, right? So I think the community needs to figure out a way, yeah, like the best balance, yeah? Such that we can have open models, but on the other hand, also have a reasonable collection of data that we can actually use.

Swyx [00:19:29]: I think a lot of people think that, there's a theory that Whisper was released so that you could transcribe YouTube and then use that as a source of tokens. Then I talked to other researchers who are like, you know, YouTube has very low quality tokens. You know, do you want your model to talk like a live streamer from YouTube? Because that's what they're going to do. So it's not clear, like, what the quality of this data could be.

Ce [00:19:53]: Yeah, I guess that depends on your application, right? So I think as a platform, right, our goal is, whatever application you have, yeah, we have a platform that you can actually achieve your goal with, right? So there are definitely applications where it kind of makes sense to speak like YouTube, right? But there are probably also other applications that are kind of more on the formal side, right? So I think there are going to be a diverse collection of models, both open and closed, right? And we kind of want to be the engine that powers that.

Swyx [00:20:21]: There's a lot of people who own data sources who are doing the locally optimal thing, and humanity as a whole is losing out. So like New York Times is suing OpenAI, you know, Stack Overflow shut down their API, Reddit shut down their API, X, you know, made their own model, right? On Twitter data. We're just going to have all these like tiny little gardens of data that would be useful in a general model, but everyone's just trying to make their own model. And it seems like globally suboptimal.

Vipul [00:20:47]: I think you need to have some kind of a marketplace for figuring out how to get this, you know, data into models, and I think we'll increasingly see more of that. You know, I think there's a positive aspect to it too. There is an incentive for creators to participate in a system which is sort of more fair relative to, you know, the capture of value by an AI company that's taking their data. But I agree. I think this is a big open problem that needs to be solved. And I hope there will be, you know, serious efforts around it.

Alessio [00:21:19]: Let's talk about the most precious resource on planet Earth, GPUs. You have a lot of compute obviously, but you also have a lot of product pieces. You have inference, you have fine-tuning, you have pre-training. What's the split in terms of usage?
Do you see most people just running inference on off-the-shelf models? Do you see maybe some last-mile fine-tuning?

Vipul [00:21:40]: I would say right now, the top five models on our inference stack are probably all fine-tuned versions of open models. And we've seen-

Swyx [00:21:51]: Who fine-tuned them? You fine-tuned them?

Vipul [00:21:52]: They were fine-tuned by our customers.

Swyx [00:21:54]: By your customers.

Vipul [00:21:55]: You know, either on our platform or off our platform. And we are generally seeing that, you know, that is the sort of trend where you can get better quality on your task by sort of now easily adapting these models to your data. We also have, I would say, over 20 big model builds happening on the platform, which are customer builds. We see a lot of training, and it's also somewhat surprisingly a more continuous kind of workload. We sort of imagined that this would be more episodic. You train a model and then you do inference. But what we find is, you know, we train a model and then they train the next version and then the next version, which sort of grows in scale. I would say training is still the bigger portion. In some ways inference is superlinear to model quality. And as the models are getting better, there's more and more inference.

Swyx [00:22:48]: Oh, because they're more useful. Yeah, they're more useful, yeah. So, okay, so training is bigger. This is actually consistent with what we've heard from Mosaic, that, you know, people think that training is sort of like a one-time deal. You do one big run and then you're done. It's never true. And so I'm interested in, like, putting some numbers on it, and I don't know what you have disclosed or what you want to disclose, but, like, how many GPUs do you have? What is the equivalent amount of compute that you have? Because I understand that your GPU setup is different than what people typically think of, like, a giant data center somewhere, right?

Vipul [00:23:20]: I don't think we have shared this number publicly. It's, you know, so this will be the first time, I guess. Like, we have close to 7,000 to 8,000 GPUs today. It's growing monthly.

Swyx [00:23:31]: What class of GPU are they?

Vipul [00:23:32]: They're mostly A100s and H100s.

Swyx [00:23:35]: Okay.

Vipul [00:23:36]: And probably more, I think, split towards H100s now. You know, we'll be sort of building this best-of-class hardware. So as there are other versions of these coming out later this year, we plan to have those in the fleet as well.

Alessio [00:23:53]: I know when we talked last year, you were also using some of the supercomputers by the Department of Energy. There was kind of like a lot of random GPU compute in the world. Have you seen that kind of getting timed out? I think maybe a year ago, people were like, oh, yeah, you can use this GPU computer that is going to be end-of-life. Has the bar changed to give access to those resources?

Ce [00:24:13]: From our perspective, it's actually getting better. Yeah, so from the community perspective, because many of the institutions in the world, they're actually investing in hardware, right? So for example, we are working with one of the institutes in Germany called Hessian AI, right, which gives us a lot of help on the compute side. So they're starting to have this very big GPU cluster, and they're actually sharing that with the community, right? And it's not super big, right, but also not a small one, right? So you start to see these different clusters that start to pop up, right?
And because of the power of the community, they start to actually share that. So we actually find, as a researcher today, it's probably easier to actually get a GPU than last year.

Swyx [00:24:56]: Interesting.

Alessio [00:24:56]: And then for you to buy them, what's the state of the market right now? Is it still extremely hard to get any? Do you have Jensen's phone number? Do you have like GM's phone number? Do you guys get like the SDR because you're like under 10,000?

Vipul [00:25:12]: NVIDIA is obviously motivated to help us, both as an investor and because we are their customers. I would say the market is very tight still, and it's likely going to be this way for a while. My sense is that the demand for AI computing has just kind of ramped up very, very quickly, and it will take a while for supply to catch up.

Swyx [00:25:37]: So how tight is it? Let's say compared to like a year ago, two years ago, what do you mean when you say tight? The things you want, you can't get?

Vipul [00:25:42]: You can't get them immediately. They're sort of, you know, minimally like two to three months out. Any inventory that shows up tends to clear very, very rapidly. And, you know, we obviously sort of look at this in a very detailed and analytic way. There are four to five million GPUs that will be sold this year from NVIDIA and others. And if you think about a 512-to-1,000-GPU cluster for a company, that's 4,000 to 8,000 companies, right? So it's in some ways a very small number. In other ways, the cost of GPUs will be, you know, $80 to $100 billion, and then you layer servers and data center space and electricity on top of that, and that's, you know, close to $250 billion worth of compute, which, when you compare it to the cloud computing of today, you know, AWS's revenue last year was $88 billion. So this is really kind of a build-out happening of AI hyperscalers. It is much more disaggregated, and it's very, very global. So, you know, we think that GPUs are going to be sort of a precious resource for a long time, and using them optimally is very valuable.

Swyx [00:27:02]: Yeah.

Alessio [00:27:02]: Our friend Dylan Patel from SemiAnalysis, he wrote a post about the inference market recently and obviously mentioned you guys. In his post, he said, our model indicates that Together is better off using two A100 80GB systems rather than an H100-based system. The temperature and performance testing also point to Together utilizing speculative decoding. Any thoughts? Is Dylan right? I don't know, what's-

Swyx [00:27:26]: What is his model, man? What does he know that they don't know? Yeah, exactly.

Alessio [00:27:30]: I wanna know, I guess like from the outside, and sometimes we even do it, we try and speculate on what people are actually doing. So for the first time, now we have a former guest writing about a current guest. So we wanna know what you guys thought and maybe what are some of the misconceptions that people from the outside have on what it takes to run like a GPU cloud today?

Vipul [00:27:50]: Yeah, big fan of Dylan's, by the way. I religiously read SemiAnalysis. I think there were some errors in that analysis. In particular, we were trying to decode it, and one of the things we noticed is that it assumed that input tokens weren't being priced. So I think that may have been an error in the model. I also don't think that there's this assumption that people are running this at a loss. I think it's very expensive. You can't do that for very long.
And there are trade-offs in the batch sizes you use and the tokens-per-second performance you get; those are system-level trade-offs. We've done a lot of work here; this is one of the key areas of research for us. Our inference stack is a combination of 50 different tricks and techniques, and we think there's a lot of room for optimization. So whichever hardware provides better price-performance, whether it's H100s or A100s or L40s, we measure price-performance on the particular hardware and tend to use that for that model. In some cases, certain customers have data streams that can be optimized for a particular configuration regime. We do fairly detailed work on making this more efficient, so it's hard, from the outside, to look at memory bandwidth and estimate what's actually happening. Alessio [00:29:26]: How many of these 50 tricks are you keeping to yourselves, and how many are you going to open up? Because we had Tri on, and obviously Flash Attention 2 is open source. He mentioned he'd love to come work at Together because of how much you care about open source. How do you weigh that as a CEO and CTO? Vipul [00:29:43]: A lot of it is open, right? Flash Attention, Flash Decoding, et cetera. When we build something that's generally, universally useful and will produce better open-source AI, we tend to publish it as open source. On the inference stack, there are open-source inference stacks that are pretty good, and today it definitely gives us a competitive advantage to have the best one. So we are not rushing to release everything about it: it's not that additive to the open source already out there, and it's particularly useful as a business for us to provide the best price-performance. We make these decisions deliberately. Anything we keep closed, we generally talk about quite a bit and decide: this is the piece that is closed for today, and that may not be the case six months from now. It may not matter as much by then. Ce [00:30:40]: Yeah, I think being open is very important. The whole company is built on the idea that there will be an ecosystem built on our open models. That's also how we've been lucky enough to attract this top group of talent: they join because of the dream and the mission we have to facilitate the open ecosystem. In general, I think all the ideas should be open. That's why we publish papers and talk about ideas; I don't think it makes any sense to keep ideas closed. Then there are some software artifacts that are deeply embedded in our own stack, that are only really useful when you're trying to build a disaggregated cloud. Maybe at some point those will be opened up, as people have said, but at this moment we're busy actually building it; that's probably the picture of when that piece will open. But on the research side, the ideas, and for our people to publish things, that's really, really important. That's how we get talent, and that's how we as a company are going to move the field forward. Swyx [00:31:49]: I noticed that you never use the words federated learning or federated inference.
Is there a distinction you draw? Ce [00:31:55]: It's definitely not intentional, but federated learning has been used in so many different ways by so many different people that it has started to lose a precise meaning. If you go back to the original Google paper on federated learning, that's very different from what people mean today when they say federated. We want to be really precise about it. Swyx [00:32:18]: And so your term is disaggregated. Ce [00:32:19]: Yes, as infrastructure, it's disaggregated. Swyx [00:32:22]: Aren't most clouds disaggregated? What's different about yours? Ce [00:32:27]: Most clouds are disaggregated, but some of that is exposed to the user. If you go to AWS, you know which region you are in. One thing we're trying to do is build a cloud that is disaggregated not only in location or geography, but in the reliability and diversity of the infrastructure, and then build a reliable, high-quality layer over it so the user doesn't have to know what's happening under the covers. I think that's one difference in the way we think about infrastructure. Swyx [00:33:06]: Yeah, a bit closer to Cloudflare than AWS. We have one question here which we'll just throw out; it's kind of fun. Going back to the inference stack piece: if you had to put out a call for researchers, or point out interesting areas of work, which pieces of the stack have the most opportunity for improvement? Ce [00:33:27]: The way we think about the inference stack is that multiple things can happen. You can do better algorithms, like speculative decoding; you can change the model architecture; you can go really deep on the systems side; and you can co-design with the hardware. It's not clear that innovation on a single dimension will get you there. The key thesis on our side is that if you only push in one direction, you reach diminishing returns really, really quickly: there's only so much you can do on the systems side, only so much on the algorithm side. The big gains come when all those dimensions compound, when algorithm, model, and system come together. That's how we reach the next 10x improvement in inference. So I don't think any single dimension is most important; looking at the space jointly, co-optimizing multiple dimensions together, is what's going to matter for the community. Vipul [00:34:28]: We often see this in numbers from the team: you have multiple methods, and not all of them compound. You mix some together and get similar results, and then some combination of them has this incredible effect. That's really super interesting. So it's a broad systems approach that's most effective. Swyx [00:34:51]: I think I finally get the name of the company. Bring it together, yeah.
Everything needs to be optimized together. Alessio [00:34:57]: All right, quickly: how does all this work change as architectures change? I know that with mixture of experts, for example, speculative decoding is a little less efficient because of memory bandwidth. How much do you invest when it's a model-specific improvement versus a more horizontal thing? And since you're researching new architectures, how much time do you spend optimizing what's state of the art today versus what's coming next? Vipul [00:35:24]: We spend time on both what's state of the art today and what's next. The value we get from specific optimization, even for what works well for a particular model on A100s with a particular bus versus H100s, makes it a worthwhile investment for us. So we will go fairly deep into a specific architecture and specific hardware. It also informs what works better where, and you don't have to take the same approach for every model and every hardware setup. We have multiple systems now: we know that system B is better for Mixtral, and system C is going to be better for StripedHyena or Mamba. Alessio [00:36:13]: Before we move on from inference, we need to talk about the AnyScale drama. We're actually having Soumith on the podcast tomorrow, and he also came to your support, saying it's not just Together claiming this benchmark isn't good because you look bad in it. It's a hard question to ask, but why did you decide to come out and say it? And how does that reflect your values about open source and openness, about being transparent about what's real, and maybe your hopes for standardizing some of these benchmarks? Ce [00:36:56]: First, it's a great service AnyScale is doing for the community. It's very hard to do benchmarks: the moment you benchmark and compare N players, N minus one of them will be unhappy; if you have two tables, maybe all N will be unhappy. So it's a great thing they're doing, and in some of our own work we actually use LLMPerf. But one thing about benchmarks, and this is probably the professor part of me talking, is that a good benchmark should think about how it will incentivize the field to move forward. If the benchmark becomes a standard, how will people over-optimize to it? And when they do, what are we actually incentivizing? Will that move the world to a better place, or will it have every player spend time and money on marketing, on things that don't actually matter technically? It's very hard to strike that balance. So the reason we gave feedback on the benchmark is that we want to open up a discussion about how the industry should come together and define a common way to compare with one another, the way database people do with TPC. Maybe we should have something similar. We're trying to start that conversation.
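As a sidebar on what such an industry benchmark would have to standardize, here is a toy sketch of the two measurements at the center of this discussion: time to first token and tail latency percentiles. The streaming endpoint below is simulated with made-up delays; it is not any real provider's API.

```python
import random
import statistics
import time

def fake_streaming_endpoint(n_tokens=20):
    """Stand-in for a provider: queueing/prefill delay, then streamed tokens."""
    time.sleep(random.uniform(0.02, 0.08))      # queueing + prefill delay
    for _ in range(n_tokens):
        time.sleep(random.uniform(0.004, 0.01))  # per-token decode time
        yield "tok"

def measure_once():
    start = time.perf_counter()
    stream = fake_streaming_endpoint()
    next(stream)                                 # wait for the first token
    ttft = time.perf_counter() - start           # time to first token
    tokens = 1 + sum(1 for _ in stream)
    total = time.perf_counter() - start
    return ttft, tokens / total                  # (seconds, tokens/sec)

runs = [measure_once() for _ in range(50)]
ttfts = sorted(t for t, _ in runs)
p50 = ttfts[len(ttfts) // 2]                     # median TTFT
p95 = ttfts[int(len(ttfts) * 0.95) - 1]          # tail TTFT
print(f"TTFT p50={p50*1000:.0f} ms  p95={p95*1000:.0f} ms")
print(f"median throughput={statistics.median(tp for _, tp in runs):.0f} tok/s")
```

A real harness would additionally control for test location, time of day, and model quality, the confounders Vipul lists just below.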
It's not that we jumped out to say it's not good; there's no way to have a perfect benchmark, that doesn't exist. We just tried to kickstart a conversation, so that maybe we can come together and do something the community agrees on, aligned with the benefit a user is actually going to get. Vipul [00:38:42]: I've spoken to the AnyScale team since then, and I think they had really great intentions. Partly, everyone had a reaction to it because it just didn't match the benchmarks we've all run internally against different services. There's a difference between a common industry benchmark run by an independent party and one run by one of the vendors. Swyx [00:39:04]: Is there one that you'd point to? Vipul [00:39:06]: I don't think one exists today. I think there should be; we're having conversations about someone setting one up. And there are lots of interesting aspects to this. Time to first token is a function of where the test was run from. There's different load on these services at different times of the day, weekday versus weekend, so you have to measure that well. If all of that were done well by an independent source, it would be a very useful service to customers and to the services themselves. Swyx [00:39:39]: Yeah, I'll point people to artificialanalysis.ai, a new one that recently emerged. I don't know if they've done it right; it looks like a side project of a couple of people. But I think it's in all the providers' interest to work with them and ensure there's an independent third party measuring these things, at least as a baseline. For me, what's worrying is more what Ce was saying: do these benchmarks skew things in ways that customers might not be mindful of? What are they overemphasizing that we might be missing? And I don't really know. A lot of these services bundle in a version of quantization as well, so there are performance trade-offs; you're not comparing apples to apples, the same model itself, even though it's a Llama variant or whatever. So what do people trade off? Latency and price, obviously; those are the first two. But what else? What factors matter in an inference business? Ce [00:40:33]: There's also throughput, alongside time to first token. And there are things users don't often see, for example reliability and capacity, which affect user experience at a global scale, maybe not on a single query, but in aggregate. There's whether you emphasize P50 or P95. So there's a whole bunch of things you can play with. And of course there's quality: there are different ways to make the whole thing faster, speculation, quantization, or a combination of those. With so many dimensions in play, they probably need a benchmark whose protocol is transparent, so it's very clear what's being done, plus a whole bunch of quality checks to make sure we're putting the right group of systems in the same table. Then users can actually navigate the space. I think that's going to be good for everyone. Swyx [00:41:27]: Yeah, makes sense.
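Since speculative decoding keeps coming up as one of these speed-ups, here is a minimal sketch of the idea in its greedy-verification form: a cheap draft model proposes a few tokens, the target model checks them in one pass, and the longest agreeing prefix is kept. The "draft" and "target" models below are toy bigram functions invented for the example, not anything Together actually runs.

```python
import numpy as np

VOCAB = 50
rng = np.random.default_rng(0)
W_target = rng.normal(size=(VOCAB, VOCAB))                       # "big" model
W_draft = W_target + rng.normal(scale=0.1, size=(VOCAB, VOCAB))  # cheap approximation

def next_token(W, ctx):
    """Greedy next token given the last token (toy bigram 'model')."""
    return int(np.argmax(W[ctx[-1]]))

def speculative_step(ctx, k=4):
    # 1) Draft proposes k tokens autoregressively (cheap to run).
    draft = list(ctx)
    for _ in range(k):
        draft.append(next_token(W_draft, draft))
    proposals = draft[len(ctx):]
    # 2) Target verifies; in a real system all k positions score in one pass.
    accepted = []
    for tok in proposals:
        target_tok = next_token(W_target, ctx + accepted)
        if tok == target_tok:
            accepted.append(tok)          # draft matched the target: keep it
        else:
            accepted.append(target_tok)   # first mismatch: take target's token
            break
    return ctx + accepted, len(accepted)

ctx, total, passes = [1], 0, 0
while len(ctx) < 30:
    ctx, n = speculative_step(ctx)
    total += n
    passes += 1
print(f"{total} tokens in {passes} target passes ({total/passes:.1f} tokens/pass)")
```

The output is identical to greedy decoding with the target model alone; the win is that each expensive target pass now yields more than one token on average.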
It's a very important field, and I hope a good third party emerges from this. I just want to touch on one more piece: I'm appreciating from this discussion that fine-tuning is a bigger part of your business than I thought. The other big player in fine-tuning is Mosaic; well, Mosaic is more training, but there's a bunch of other players in the fine-tuning space. If I were a prospective fine-tuning customer, what do I come to you with? Do I come with my custom data and that's it? Do I also have to write the fine-tuning code? What level of engagement do you have with your customers? Vipul [00:42:01]: It's across the spectrum. Some of our customers are pre-training models from scratch, and many of them bring their data sets and use our infrastructure and training stack to train their models. Others have trained smaller models and want to scale up, across infrastructure and across data, and we help them do that. Some customers start out a little more consultative: they have a particular task and idea in mind, and we help them get from there to the data set and the right model to achieve that task. So it's a spectrum, and our goal is to productize as much of this as possible so the whole process is fast and scalable. I would say there's a lot more understanding around fine-tuning now than even six months ago. There are open-source tools, recipes, literature, podcasts, Discord channels where people are figuring this out. In many ways it's one of the successes of open source: small collectives of engineers are now creating the top models on open-source leaderboards, trying out all sorts of data recipes, creating synthetic data, merging models. That's really fun to see, and the agency that exists now is exciting. We see a lot of it being applied to products and to commercial models that people deploy in their applications. Alessio [00:43:50]: And then, to wrap up on Together: it's almost becoming a platform as a service, because now you've released Together Embeddings. How did you get 92.5% accuracy on 32K retrieval? And do you think we're getting to the end of what's possible with embeddings, that they're about as optimized as they're going to get and we should just focus on models and inference? Or is there still room to improve? Ce [00:44:17]: Oh, we haven't even gotten started on embeddings. There are so many things. Embeddings are really fundamental to many things, for example RAG, and to deep applications; that's how people bring knowledge in. They're also a fundamental piece when you want to build a better model: they give you an understanding of what actually goes into the model, so you can use them to build a better data set, get a better model, then get better embeddings, and the loop continues. Without good embeddings, the loop isn't closed. So on the quality side, there's how to embed more dedicated semantics into those vectors, and how to deal with negation, for example. And then there's how to make the whole thing really, really fast.
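As a sketch of why embeddings sit underneath RAG, the toy below retrieves the passage whose vector lies closest to a query by cosine similarity. The bag-of-words hashing "embedder" is a deliberately crude stand-in, not Together's (or anyone's) actual model, and it also illustrates the negation problem Ce mentions: a sentence and its negation come out with nearly identical vectors.

```python
import numpy as np

def toy_embed(text, dim=64):
    """Crude bag-of-words hashing embedding, normalized to unit length."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0           # hashing trick; ignores word order
    return v / (np.linalg.norm(v) + 1e-9)    # unit vectors => dot = cosine

passages = [
    "GPUs remain a scarce resource this year",
    "embeddings power retrieval augmented generation pipelines",
    "speculative decoding speeds up autoregressive inference",
]
index = np.stack([toy_embed(p) for p in passages])

query = toy_embed("how does retrieval augmented generation work")
scores = index @ query                        # cosine similarity to each passage
print(passages[int(np.argmax(scores))])       # -> the RAG passage

# Negation blind spot: near-identical vectors for opposite meanings.
print(np.dot(toy_embed("the model is good"), toy_embed("the model is not good")))
```

Real embedding models replace the hashing trick with a trained encoder precisely so that semantics like negation can separate such pairs.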
So I think for the next couple of years we'll see a whole bunch of new embedding models, of different sizes and much, much faster than today. It's a very active research area, and I think people should invest more in it. Swyx [00:45:14]: I was surprised to see Jina AI, and then Tengyu's Voyage, coming out as startups purely focused on embeddings. Ce [00:45:25]: Yeah, it's a very, very important piece of the system. People haven't focused much on it before, and they should definitely start to. Swyx [00:45:36]: Why are the Chinese universities so good at embeddings? You know what I mean, right? Like BGE and... Ce [00:45:44]: I don't know. We just released our first embedding model, so we're still learning how to build one. Ask me again in six months; I'll probably have more insight about how to build a better one. Swyx [00:45:53]: I just noticed that ada-002 used to be at the top of the MTEB chart, and then it's been sliding down and down and down, and all the new models are coming out of China for some reason. I don't know what's going on there. So, we cannot leave this discussion without talking about state space models. But first of all, how much of the company is dedicated to research? It's obviously not production quality yet, but... Vipul [00:46:17]: I would say it's 40 to 45%; I was counting this morning. Swyx [00:46:22]: That's huge, probably the biggest such investment out there. Well, it looks like it's paying off. And then, high level, I will confess for the listeners who are similarly skeptical: I did not used to care about long context. I was like, 30K is enough, 100K is enough; I'm not modeling DNA sequences or anything like that, so why do I need long context? I'll throw that open to you. But what Mamba did for me was change the perception that it's only about long context, that the only reason you want sub-quadratic architectures is long context. Actually, that's not true: it's also just more efficient to train, period. So I'll leave that open to you: what's the motivation people should keep in their heads? Ce [00:47:09]: There are multiple things. One is that the moment a model can do long context well, it often means it's cheaper. In principle, a transformer can do long context; it's just very expensive. What these state-space models try to do is push the size of the state to be as small as possible, and decouple the quadratic dependency on sequence length, so you get a much better execution pattern. One direct consequence is that you can do long context really cheaply. But they also introduce a whole bunch of benefits even when you're not doing long context, and I think that's probably equally important: because the state is smaller, you can run really large batch sizes and go much faster. And another thing is one of the hypotheses we have with StripedHyena: it has a hybrid architecture.
Part of it is a state-space model, and part of it is still a transformer. Different components probably deal with different things better, so by putting them together, and thinking about how information propagates over the whole horizon of the context, you can probably get an even better quality model than a transformer. That's why we're investing a lot in these models: not only for the context length, which is very important, but for the whole bunch of benefits they can bring. Swyx [00:48:42]: How should people treat the distinction between Mamba and StripedHyena? What's the point of releasing these as two separate models? Is one the Together proprietary one and the other the more open research one? Ce [00:48:53]: They're pretty much different stages of exploration, built on different hypotheses. There are different views of state-space models: one is Hyena, another is Mamba, and they're actually different architectures. When we built StripedHyena, the curiosity was: what is the highest-quality non-transformer model we can ever build? The goal of StripedHyena was to see whether we could match Mistral, and, by fine-tuning well, whether we could outperform it in some ways. So it had a very, very strong baseline we were trying to beat, and that's where the hybrid design comes into the picture. For Mamba, the curiosity was more: how far can we push a pure architecture? So we went very systematically from small to large, all the way to 3 billion parameters, and the baseline was essentially the best 3-billion-parameter model. So they're at different stages of exploration, and at some point I think they'll converge; we learn different things building different models. They're intermediate stages in the exploration at different points. Alessio [00:50:02]: You mentioned the hybrid architecture. Is that the model grafting you mentioned in the StripedHyena post, where you can have transformers and non-transformers together? This is a concept I hadn't heard of before reading about it. Most people's mental model is transformers OR something else, not transformers AND something else. How do you train a model that's a hybrid? Is there any difference in how you construct your data sets, or in how you run inference on it? How should people think about starting research in this field? Ce [00:50:36]: We were also very surprised when we came up with this hybrid architecture. The way to think about it is that you have different layers in the neural network. For some layers, the state-space model already gives you the benefit. Other layers can be transformers, which give you a more global view of the sequence; but not every layer has to have that, and the other components can kick in elsewhere. We don't know the optimal mixture between different architectures; in principle you could have Mamba, Hyena, and transformer layers all come together and see what makes sense. We have no idea what's optimal.
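To make the "different layers, different architectures" point concrete, here is a minimal numpy sketch of a hybrid stack: attention blocks give every position a global view at quadratic cost in sequence length, while state-space-style blocks carry a small recurrent state in linear time. The shapes and the mixing pattern are illustrative assumptions, not StripedHyena's actual recipe.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 16, 8                          # sequence length, model width

def attention_block(x):
    """Single-head softmax self-attention: every position sees all others, O(T^2)."""
    q, k, v = (x @ rng.normal(scale=0.1, size=(D, D)) for _ in range(3))
    scores = q @ k.T / np.sqrt(D)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return x + w @ v                  # residual connection

def ssm_block(x, a=0.9):
    """Toy state-space layer: h_t = a*h_{t-1} + x_t, a linear-time scan, O(T)."""
    h = np.zeros(D)
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        h = a * h + x[t]              # small fixed-size state carried forward
        out[t] = h
    return x + out                    # residual connection

x = rng.normal(size=(T, D))
for layer in [ssm_block, ssm_block, attention_block, ssm_block]:  # one possible mix
    x = layer(x)
print(x.shape)                        # (16, 8): same interface, mixed computation
```

Because every block maps a (T, D) sequence to a (T, D) sequence, the layer types really can be rearranged like the "Lego" blocks described next.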
So what we're excited about is that the community now has a whole bunch of building blocks they can play with like Lego: put them together and see what happens. We're in the process of learning more about these architectures ourselves, and when we know what we're talking about, we'll definitely share with the community how to do this in a systematic way. Swyx [00:51:41]: Cool. What are we still unsure about? Why don't we just put all the money in the world into training these things now? What's left to figure out before we scale? Ce [00:51:53]: Look at how the transformer was developed over the last five to ten years. People didn't start from the "Attention Is All You Need" paper and immediately put all the money in. It always starts with a very systematic understanding of the scaling, of data quality, of the limits. For state-space models to go from the lab to the real world, you need to go through the same process. Of course, the second time is easier, but there's no way to skip the systematic step of studying scaling laws and studying what data to put in, what the impact of different data slices is on final model quality. Swyx [00:52:33]: Do you expect the data inputs will be different? Ce [00:52:37]: I don't know, but I wouldn't take for granted that they should be the same. We have no opinion on that, because that's the result of the study, not an assumption. We don't need to assume it. Swyx [00:52:51]: Okay, scaling laws and data. Anything else architectural that we're not sure about? Because now you have this selection mechanism that you're pretty happy with. Ce [00:52:59]: First of all, how to mix them. And second, what is the architecture? If you look at the transformer, one very interesting piece is that people also optimized the hardware side: there are very efficient kernels and very efficient hardware, and that adds another boost for the transformer architecture. The same should happen for state-space models. Which architecture is easier to run on the hardware, which goes faster so you can put in more data; that adds another dimension to the scaling law. So I think we just need to plow through the whole space and be really systematic, from small models to 1 billion, 3 billion, 7 billion, all the way up. I wouldn't jump around in the space; I'd be patient and systematic. I think we'll get there. Swyx [00:53:52]: Yeah, well, I'm looking forward to more research from you guys to figure that out. One dimension we didn't talk about: we covered long context and efficiency, but speed is also very important. A good inference provider today does, say, 70 tokens per second, and maybe that's faster than less good inference providers that are more like 30 tokens per second. That's the rough state-of-the-art range, and it's around human speaking speed; human reading speed is about 200 words per minute.
Why do we need 5,000 tokens per second? That's my question back to Vipul. And is this an emphasis for research as well, or more an inference-only thing? Vipul [00:54:29]: There are applications that consume the tokens produced by a model without any human reading or hearing them. That's where we see that level of requirement today, and really nobody can quite satisfy it. There's also the question of, as intelligence grows, how you increase the bandwidth and reduce the latency of it. And if we can do 5,000 tokens a second, the throughput of a single card goes up significantly and it can support more applications, so it's important from that perspective. And then it opens up new UX possibilities, once you can get an immediate answer...
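A quick aside on the speeds in that exchange: converting the quoted rates into multiples of human reading speed makes the point that 5,000 tokens per second only matters when the consumer of the tokens is another program. The tokens-per-word ratio below is a common rule of thumb, not a measured constant.

```python
# Rough conversion of the speeds quoted in the conversation above.
words_per_minute = 200            # typical human reading speed, as quoted
tokens_per_word = 4 / 3           # rule of thumb: one token ≈ 0.75 words
human_tok_s = words_per_minute * tokens_per_word / 60
print(f"human reading ≈ {human_tok_s:.1f} tok/s")   # ≈ 4.4 tok/s

for provider_tok_s in (30, 70, 5000):
    print(f"{provider_tok_s} tok/s ≈ {provider_tok_s / human_tok_s:.0f}x reading speed")
# 30-70 tok/s already outruns a human reader; 5,000 tok/s serves pipelines,
# agents, and many concurrent requests per card rather than a single reader.
```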
The P50 fire extinguisher, a revolutionary product in the fire safety industry, has faced significant challenges during its introduction. In this interview, Andy Spence (Joint Managing Director at Britannia Fire) discusses the resistance the P50 has met from traditional fire extinguisher companies. Despite meeting all safety standards and receiving positive feedback from regulatory bodies, the P50 has been reported to authorities and has faced opposition. Even so, its benefits, such as corrosion resistance, a ten-year guarantee, and reduced maintenance costs, have made it a success among end users. The conversation also explores the need for innovation in the fire safety industry and the barriers that exist, and Andy discusses the increasing concern around lithium-ion battery fires. For more information about the amazing P50, see: P50 Fire Extinguishers | Britannia Fire | Low Maintenance | 20 Yr Life Span (britannia-fire.co.uk). To reach out to Andy, see: Andy Spence | LinkedIn
This is the second part of the Matador in Eddy County Deal Evaluation. Because of the enormous number of requests from investors evaluating oil and gas, we are starting a new series showing people how to assess oil and gas M&A and investments. Accredited investors, family offices, and E&P operators are our largest market asking for these evaluation training pieces. We want your feedback and recommendations for deals. Reach out to Stu and Michael at https://energynewsbeat.co/ to get your deal reviewed.

Highlights of the Podcast
00:20 - Divided into two parts, focusing on the Matador deal.
00:45 - Setting up a new type curve for the Wolfcamp A.
01:08 - Understanding production curves normalized to time zero.
09:15 - The distribution of EURs (P10, P50, P90).
14:21 - Incorporating strip pricing and natural gas liquids (NGL) data.
16:37 - Creating individual forecasts for specific wells.
19:30 - Incorporating production taxes and ownership interests.
20:47 - Analyzing cash flows and calculating the internal rate of return (IRR; see the sketch after this summary).
22:02 - Determining the potential acquisition cost and assessing deal value.

A shout-out to our sponsors, WellDatabase and ComboCurve!

*We do not offer investment advice; contact your tax professional for the appropriate tax information for your investments. This is for educational purposes only.
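For readers following along at home, the last two steps in that outline, discounting cash flows and solving for the internal rate of return, reduce to a few lines. The well economics below are invented for illustration only and have nothing to do with the actual Matador numbers.

```python
# Illustrative NPV/IRR sketch with made-up cash flows (not deal data).

def npv(rate, cashflows):
    """Net present value of yearly cash flows; cashflows[0] is at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Rate where NPV crosses zero, found by bisection (one sign change assumed)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid          # still profitable at this rate: IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical well: $8M acquisition + drilling cost, declining net revenue.
flows = [-8_000_000, 4_000_000, 2_500_000, 1_600_000, 1_000_000, 700_000]
print(f"NPV at 10% discount: ${npv(0.10, flows):,.0f}")
print(f"IRR: {irr(flows):.1%}")
```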
The comrades in former East Germany commissioned their engineers to develop a small, cheap car partly to stem the flow of citizens leaving for the wealthier Federal Republic of Germany. Production of the western-inspired cars began on 7 November 1957. The legendary little vehicles were named Trabant, meaning "satellite," in honor of the Soviet satellite Sputnik 1, which had been launched into Earth orbit a month before the first 50 units of the Trabant P50 model were produced. All episodes of the podcast Příběhy z kalendáře can be conveniently listened to in the mujRozhlas mobile app for Android and iOS, or at mujRozhlas.cz.
In this episode, we are joined by skincare queen Toska Husted, the Founder of Toska Spa & Facial Bar and one of the world's leading experts in facial skincare. Toska has done facials for huge celebrities including the Kardashians, Jennifer Aniston, and Dua Lipa, and she is sharing all her best tips, tricks, and guidance on building the perfect skincare routine, maintaining healthy and glowing skin year-round, and her honest thoughts on Botox and fillers! PLUS some super exciting news on Toska's expanding business! Enter to WIN a P50 and the Future Brightening Touch Serum! Leave a rating and review, and subscribe to The Cheeky Been on Apple Podcasts. Screenshot your review and DM it to @thecheekybeenpodcast on IG. Follow @thecheekybeenpodcast on IG. Follow Toska over at @toska_europeanspa. Products mentioned: Toska Spa & Facial Bar Treatments (the Brightening Peel add-on is a fave!), Remodeling Treatment, Biologique Recherche Lotion P50 Range, Colorescience Sunscreens, Biologique Recherche Cleansers, L'Eauxygénante Facial Mist, Future Brightening Touch Serum. Some key takeaways from this episode: You could have an amazing facial treatment done at a spa, but if you don't implement a routine at home to maintain the results, your new glow won't last as long; ask your facialist how to maintain it. If you're super busy and brand new to skincare, Toska swears by a minimum three-step routine of cleansing, exfoliating, and applying SPF and a moisturizer. We spend a lot of time on facial skincare, but don't forget your skin covers your entire body: it will always look best when you eat healthy foods and stay hydrated, and be sure to exfoliate and moisturize your whole body as well! Welcome to The Cheeky Been Podcast! I'm your host Vanessa Krombeen, the creator behind The Cheeky Been, a lifestyle blog that empowers women to feel their best through fitness, fashion, health and wellness, and motherhood tips. In this show, you will find thought-provoking conversations with entrepreneurs, influencers, and brands you love, and the answers to your most sought-after questions. Connect with me! Follow The Cheeky Been, Follow The Cheeky Been Podcast, The Cheeky Been Blog, TikTok, YouTube. Business inquiries: thecheekybeen@gmail.com
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.07.25.550472v1?rss=1 Authors: Brinkmann, P., Devos, J. V., van der Eerden, J. H., Smit, J. V., Janssen, M. L., Kotz, S. A., Schwartze, M. Abstract: Objective: Tinnitus denotes the perception of a non-environmental sound and might result from aberrant auditory prediction. Successful prediction of formal (e.g., type) and temporal sound characteristics facilitates the filtering of irrelevant information (sensory gating, SG). Here, we explored whether and how parallel manipulations of formal and temporal predictability affect sensory gating in persons with and without tinnitus. Methods: Age-, education-, and sex-matched persons with and without tinnitus (N = 52) participated and listened to paired-tone oddball sequences varying in formal (standard vs. deviant pitch) and temporal (isochronous vs. random timing) predictability. EEG was recorded from 128 channels, and data were analyzed by means of temporal-spatial principal component analysis (tsPCA). Results: SG was observed in P50- and N100-like activity (amplitude suppression for the second tone in a pair) in both timing conditions and groups. Correspondingly, deviants elicited overall larger amplitudes than standards. However, only in persons without tinnitus was N100-like activity in response to deviants enhanced with isochronous relative to random timing. Conclusions: Persons with tinnitus do not benefit from a temporally predictable context in deviance processing in the same way as persons without tinnitus. Significance: The current results indicate altered temporal sensitivity and selective attention allocation in persons with tinnitus. Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC
Get to know some of the most important and innovative products, formulas, and trends Kobo will feature at in-cosmetics® Global 2023 in Barcelona, Spain. If you are attending the show between March 28th and 30th, come visit Kobo Products at booth P50! Links: in-cosmetics Global website: https://www.in-cosmetics.com/global/en-gb.html ABOUT US: Since 1987, Kobo has provided innovative, technology-based raw materials to the cosmetics industry. The product range includes Surface-Treated Pigments, Microspheres, Suncare and Color Dispersions, Silicone Fluids, Specialties, Natural Ingredients, Effect Pigments, Boron Nitride, and Delivery Systems. Kobo has five locations: USA (Corporate Headquarters), France, Japan, Brazil, and the UK, and is represented globally by independent agents. Learn more at: www.koboproductsinc.com
Headlines: NAIA, the country's primary airport, canceled and diverted flights over New Year's | President Marcos orders suspension of the 2023 increase in PhilHealth premium contributions | Headphones with a built-in air filter to be released, priced at 749 pounds, or over P50,000. Tagalog.com news podcast for Filipino/Tagalog language learners. You can also listen with a Tagalog transcript and English translations here: https://www.tagalog.com/podcast/play.php?podcast_id=129 Listen to all our transcribed episodes here: https://www.tagalog.com/podcast/
This is the final episode of the Farms Advice podcast for 2022. In this episode we jump back into the GPA series, talking about ZP50 and what it can do for Australian grain producers. We talked to Andrew Wiedemann and also Steve Henry from CSIRO about the work that's been going on behind the scenes to ensure that farmers get access to the best defensive tools against pests on farm. Visit www.Grainproducers.com.au for more information. If you like this episode, make sure you share it and subscribe so that more farmers right across Australia can pass on their own piece of #FarmsAdvice. It's been the biggest year yet for the podcast as we clock up 150,000 downloads in 2 years. Keep on farming! Hosted on Acast. See acast.com/privacy for more information.
This month on Episode 42 of Discover CircRes, host Cynthia St. Hilaire highlights four original research articles featured in the October 28 and November 11 issues of Circulation Research. This episode also features an interview with Dr Miguel Lopez-Ramirez and undergraduate student Bliss Nelson from the University of California San Diego about their study, Neuroinflammation Plays a Critical Role in Cerebral Cavernous Malformations. Article highlights: Jia, et al. Prohibitin2 Maintains VSMC Contractile Phenotype. Rammah, et al. PPARg and Non-Canonical NOTCH Signaling in the OFT. Wang, et al. Histone Lactylation in Myocardial Infarction. Katsuki, et al. PCSK9 Promotes Vein Graft Lesion Development. Cindy St. Hilaire: Hi, and welcome to Discover CircRes, the podcast of the American Heart Association's journal, Circulation Research. I'm your host, Dr Cindy St. Hilaire from the Vascular Medicine Institute at the University of Pittsburgh, and today I'm going to be highlighting articles from our October 28th and November 11th issues of Circ Res. I'm also going to chat with Dr Miguel Lopez-Ramirez and undergraduate student Bliss Nelson about their study, Neuroinflammation Plays a Critical Role in Cerebral Cavernous Malformations. But before I get into the interview, here are a few article highlights. Cindy St. Hilaire: The first article is from our October 28th issue, and the title is PHB2 Maintains the Contractile Phenotype of Smooth Muscle Cells by Counteracting PKM Splicing. The corresponding author is Wei Kong, and the first authors are Yiting Jia and Chengfeng Mao; they are all from Peking University. Insults to blood vessels, whether in the form of atherosclerosis, physical injury, or inflammation, can trigger vascular smooth muscle cells to transition from a contractile state to a proliferative and migratory one. Accompanying this conversion is a switch in the cells' metabolism from the mitochondria to glycolysis. But what controls this switch? To investigate, this group compared the transcriptomes of contractile and proliferative smooth muscle cells. Among the differentially expressed genes, more than 1,800 were reciprocally up- and down-regulated. Of those, six were associated with glucose metabolism, including one called Prohibitin-2, or PHB2, which the team showed localized to the artery wall. In cultured smooth muscle cells, suppression of PHB2 reduced expression of several contractile genes, while in rat arteries, injury decreased production of PHB2 itself and of contractile markers. Furthermore, expression of PHB2 in proliferative smooth muscle cells could revert these cells to a contractile phenotype. Further experiments revealed that PHB2 controls the splicing of the metabolic enzyme PKM, regulating the phenotypic switch. Regardless of mechanism, the results suggest that boosting PHB2 might be a way to reduce adverse smooth muscle cell overgrowth in conditions such as atherosclerosis and restenosis. Cindy St. Hilaire: The second article I'm going to highlight is also from our October 28th issue. The first authors are Mayassa Rammah and Magali Theveniau-Ruissy, and the corresponding authors are Francesca Rochais and Robert Kelly; they are all from Marseille University. Abnormal development of the heart's outflow tract, which ultimately forms the bases of the aorta and the pulmonary artery, accounts for more than 30% of all human congenital heart defects.
To gain a better understanding of outflow tract development, and thus the origins of such defects, this group investigated the roles of transcription factors thought to be involved in specifying the superior outflow tract, or SOFT, which gives rise to the subaortic myocardium, and the inferior outflow tract, which gives rise to the subpulmonary myocardium. The transcription factor Hes1 is overexpressed in superior outflow tract cells, while the transcription factors TBX1 and PPAR gamma are expressed in inferior outflow tract cells. This group has now shown that TBX1 drives PPAR gamma expression in the inferior outflow tract, while Hes1 suppresses PPAR gamma expression in the superior outflow tract. Indeed, in mouse embryos lacking TBX1, PPAR gamma expression was absent in the outflow tract, while in mouse embryos lacking Hes1, PPAR gamma expression was increased and PPAR gamma-positive cells were more widespread in the outflow tract. The team also identified the signaling kinase DLK as an upstream activator of Hes1 and a suppressor of PPAR gamma. In further detailing the molecular interplay regulating outflow tract patterning, the work will shed light on congenital heart disease etiologies and inform potential interventions for future therapies. Cindy St. Hilaire: The third article I want to highlight is from our November 11th issue of Circulation Research, and the title is Histone Lactylation Boosts Reparative Gene Activation Post Myocardial Infarction. The first author is Jinjin Wang and the corresponding author is Maomao Zhang; they're from Harbin Medical University. Lactylation of histones is a recently discovered epigenetic modification that regulates gene expression in a variety of biological processes. In inflammation, for example, a significant increase in histone lactylation is responsible for switching on reparative genes in macrophages when pro-inflammatory processes give way to pro-resolving ones. The role of histone lactylation in inflammation resolution has been shown in a variety of pathologies but had not been examined in myocardial infarction. Wang and colleagues have now done just that. They isolated monocytes from the bone marrow and the circulation of mice at various time points after induced myocardial infarction and examined the cells' gene expression patterns. Within a day of myocardial infarction, monocytes from both bone marrow and blood had begun upregulating genes involved in inflammation resolution. Concordant with this, histone lactylation was dramatically increased in the cells, specifically at genes involved in repair processes. The team went on to show that injection of sodium lactate into mice boosted monocyte histone lactylation and improved heart function after myocardial infarction, findings that suggest further studies of lactylation's pro-resolving benefits are warranted. Cindy St. Hilaire: The last article I want to highlight is titled PCSK9 Promotes Macrophage Activation via LDL Receptor-Independent Mechanisms. The first authors are Shunsuke Katsuki and Prabhash Kumar Jha, and the corresponding author is Masanori Aikawa; they are from Brigham and Women's Hospital and Harvard Medical School. Statins are the go-to drug for lowering cholesterol in atherosclerosis patients, but the more recently approved PCSK9 inhibitors also lower cholesterol and can be used to augment or replace statins in patients where those drugs are insufficient.
PCSK9 is an enzyme that circulates in the blood and degrades the LDL receptor, thereby impeding the removal of bad cholesterol. The enzyme also appears to promote inflammation, thus potentially contributing to atherosclerosis in two ways. This group now confirms that PCSK9 does indeed promote pro-inflammatory macrophage activation and lesion development, and does so independently of its actions on the LDL receptor. The team assessed PCSK9-induced lesions in animals with saphenous vein grafts, which are commonly used in bypass surgery but are prone to lesion regrowth. They found that LDL receptor-deficient mice carrying grafts had greater graft macrophage accumulation and lesion development when PCSK9 activity was boosted than when it was not. The animals' macrophages also had higher expression of pro-inflammatory factors. Together, this work shows that PCSK9 inhibitors provide a double punch against atherosclerosis and might be effective drugs for preventing the all-too-common failure of saphenous vein grafts. Cindy St. Hilaire: So, today I have with me Dr Miguel Lopez-Ramirez and undergraduate student Bliss Nelson from the University of California San Diego, and we're going to talk about their study, Neuroinflammation Plays a Critical Role in Cerebral Cavernous Malformation Disease, which is in our November 11th issue of Circulation Research. Thank you both so much for joining me today. Before we talk about the science, do you want to tell me a little bit about yourselves? Bliss Nelson: My name is Bliss Nelson. I'm a member of Miguel Lopez-Ramirez's lab here at UC San Diego in the School of Medicine. I'm an undergraduate student here at UC San Diego; I'm actually a transfer student. I went to a community college here in California, and I got involved in research after I transferred. Cindy St. Hilaire: What's your major? Bliss Nelson: I'm a cognitive science major. Cindy St. Hilaire: Excellent. You might be the first undergrad on the podcast, which is exciting. Bliss Nelson: Wow, what an honor. Thanks so much. Cindy St. Hilaire: And Miguel, how about you? Miguel Lopez-Ramirez: Yes, thank you. First, thank you very much for the opportunity to present our work through this medium; it's very exciting for us. My name is Miguel Alejandro Lopez-Ramirez, and I'm an assistant professor in the Departments of Medicine and Pharmacology here at UCSD. Cindy St. Hilaire: Wonderful. I loved your paper, because, well, first, I don't think we've talked about cerebral cavernous malformations before. So what are CCMs, and why are they so bad? Bliss Nelson: Cerebral cavernous malformations, or CCMs for short, are common neurovascular lesions caused by a loss-of-function mutation in one of three genes: KRIT1 (CCM1), CCM2, and PDCD10 (CCM3). CCM is generally regarded as an endothelial cell-autonomous disease of the central nervous system, the brain and the spinal cord. The relevance of CCMs is that they affect about one in every 200 children and adults, causing a lifelong risk of chronic and acute hemorrhaging. CCMs can be quiescent or dynamic lesions. If they are dynamic, they can enlarge, regress, or behave progressively, producing repetitive hemorrhaging and exacerbations of the disease. Other consequences of the disease can be chronic bleeding, focal neurological deficits, headaches, epileptic seizures, and, in some cases, death. There's no pharmacological treatment for CCMs.
The only option some patients may have is surgery to cut out the lesions, but of course this depends on where the lesion or lesions are in the central nervous system, if surgery is even an option. So sometimes these patients have no option and no treatment, which is what propels our lab toward finding a pharmacological treatment and uncovering the mechanisms behind the disease. Cindy St. Hilaire: Do people who have CCMs know that they have them, or do they sometimes go undetected? And when they are detected, what are the symptoms? Bliss Nelson: Patients who have them may not show any symptoms, either ever in their lifetime or until a certain point, so really the only way to find out is to get a brain scan, see a doctor, or start having symptoms. One of the issues with CCMs is that they're very hard to diagnose, and in the medical community there's a lack of knowledge about CCMs, so sometimes you may not get directed to the right specialist in time, or ever, and be diagnosed. Miguel Lopez-Ramirez: I will just add a little bit. It is fabulous what you're doing; I think this is very, very good. But yes, that's why they're considered a rare disease: it's not an obvious disease, so most patients go asymptomatic even when they have a lesion, and there are still no answers for why asymptomatic patients become symptomatic. There is a lot in this study, which we will discuss in more detail, where we try to explain the transition from silent, or quiescent, lesions into more active lesions that cause disability for the patient. The symptoms can start with headaches, or in some cases more pronounced neurological deficits, such as weakness in the arms or loss of vision, and in many cases problems with speech or balance. So the symptoms a patient will experience depend on where the lesion is in the brain or spinal cord. Some of the most severe presentations are hemorrhagic stroke, vascular thrombosis, and seizures. Cindy St. Hilaire: What have been some limitations in the study of CCMs, in trying to figure out what's going on? Bliss Nelson: One limitation is that the propensity for lesions to arise isn't known, so for the labs that work on the disease, just getting down to the basic building blocks of what's happening is a major problem. Until that's well established, it's really hard to move to the pharmacological side of treating the disease or helping patients, because you don't know what's going on at the molecular level. Cindy St. Hilaire: You just mentioned the molecular level; maybe let's take a step back. What's actually going on at the cellular level in CCMs? Which cell types become unhappy? Which are the key players? Bliss Nelson: That's a great question and a great part of this paper. When we talk about the neuroinflammation in the disease, we're reporting the interactions between the endothelium, astrocytes, leukocytes, microglia, and neutrophils, and we've coined the term CaLM interaction for this. Cindy St.
Hilaire: Great name, by the way. Bliss Nelson: Thank you. All props to Miguel. And if you look at our paper, figure seven has a great graphic showing this interplay, the different components and the different cell types involved in the CaLM interaction happening within and around the CCM lesions. Cindy St. Hilaire: What does an astrocyte normally do? I think our podcast listener base is well versed in endothelial cells, smooth muscle cells, and pericytes, but not many of us, not going to lie, including me, really know what an astrocyte does. So what does that cell do, and why do we care about its interaction with the endothelium? Miguel Lopez-Ramirez: Well, the astrocytes play a very important role. There are actually more astrocytes than any other cell type in the central nervous system, which tells you how important they are. They play a very important role in maintaining neurological synapses and the homeostasis of the central nervous system, supporting not only the neurons during neural communication but also the blood vessels of the brain. Another important role is in inflammation, the response to damage. What this study proposes is a new signature for these reactive astrocytes in cerebral cavernous malformation disease. Understanding better how a malformed vasculature activates the astrocytes, and how the astrocytes contribute back to the development of malformations, will teach us a lot about how new therapeutic targets can be identified for the disease. That is part of this work, and we have now extended it to how astrocytes contribute to communication with immune cells, as Bliss already mentioned. Cindy St. Hilaire: Is it a fair analogy to say that an astrocyte is more similar to a pericyte in the periphery? Miguel Lopez-Ramirez: No, actually there are pericytes in the central nervous system as well, and they have different roles. The pericyte is a mural cell that gives the vessel its shape, plays a role in contractility, and maintains the integrity of the vessels, while the astrocyte is more like part of the immune system, but also supplies growth factors and captures anything that leaks out of the vasculature. Cindy St. Hilaire: You used a handful of really interesting mouse models in this study. Can you tell us a little about the base model for CCM and some of the unique tools you used to study the cells specifically? Bliss Nelson: Yeah, of course. I do a lot of the animal work in the lab, so I'd love to tell you about the mouse model. In this study we use an animal model with a CCM3 mutation. We use this one because it is the most aggressive form of CCM, and it gives us a wide range of options to study the disease intricately. We use a tamoxifen-regulated Cre recombinase under the control of a brain endothelial-specific promoter, driving the silencing of the CCM3 gene; we call this the Pdcd10 BECKO animal, as you can see in our manuscript. The animal without the Cre system, which does not develop any lesions, we use as a control and call the Pdcd10 flox animal. These animals are injected with tamoxifen at postnatal day one, and then, to investigate, we collect brains at different stages.
We do P15, which we call the acute stage; P50, which we term the progressive stage; and P80, which is the chronic stage. After the brain collections, we use the tissue for histology, gene expression and RNA analysis, flow cytometry, and different imaging to help us look further into CCMs. Cindy St. Hilaire: How similar is a murine CCM to a human CCM? Is there really good overlap, or are there some differences? Miguel Lopez-Ramirez: That's a very good question, and it's part of the work we are doing. This model definitely has the advantage that the vascular malformations arise in adult and juvenile animals, which is an advantage for the field: we will be able to test pharmacological therapies in a more meaningful way, with different doses and different approaches. But I cannot say there is one perfect model that mimics the human disease. It's the complementarity of multiple models, each with certain advantages, and the integration of this knowledge is what will help us understand the disease better. Cindy St. Hilaire: That's great. I now want to hear a little about your findings, because they're really cool. You took two approaches: first looking at how the astrocytes become what you're calling reactive astrocytes, and then looking specifically at the brain endothelium. Could you summarize those two big findings for us? Miguel Lopez-Ramirez: Yeah. In these studies we used a transgenic animal that gives us the ability to obtain the transcripts in astrocytes. This is very important because we don't need to isolate the cells or manipulate anything; we just take the ribosomes that are capturing the mRNAs and profile the RNAs that are specifically expressed in the astrocytes. By doing this, we looked in depth at the transcripts that were altered in the animals that develop cerebral cavernous malformation disease, and we saw multiple genes changing. Many of them were already described in our previous work and were associated with hypoxia and angiogenesis. But what we found in this work is that there were also many genes associated with inflammation and, actually, coagulation, which had not been identified before. What we noticed is that during the initial phase of the vascular malformation, these astrocytes may play a more important role in angiogenesis and the degradation of the vessels, while later in the malformation they play a more important role in thrombosis, inflammation, and the recruitment of leukocytes. That was a great advantage of this work: using this approach and looking at these astrocytes in detail. We also identified a very important signature in these astrocytes, which we refer to as reactive astrocytes with neuroinflammatory properties. With the same experimental approach, we also isolated the brain vasculature, and in the endothelium we identified quite a different pattern than we had seen before, likewise associated with inflammation, hypoxia, and coagulation pathways.
That led us to go into more detail about what was relevant in these vascular malformations. And one additional part of the paper that is novel and very impactful is that we identified the inflammasome as one important component, particularly in those lesions that are multi-cavernous. So now we have two different approaches. One, we see this temporality in which the lesions form different patterns: the initial phase may be more angiogenic, but as they become more progressive and chronic, inflammation and hypoxia pathways are more relevant for the recruitment of inflammatory cells and also for the precipitation of immunothrombosis. But we also noticed that the inflammasome, in the endothelium and in the leukocytes, may play an important role in multi-cavernous formation, and that's something we are looking at in more detail: whether therapeutics or interventions in these pathways could ameliorate the transition from single lesions into more aggressive lesions. Cindy St. Hilaire: That's kind of one of the follow-up questions I was thinking about too. From looking at the data that you have, obviously to get a CCM there's a physical issue in the vessel, right? It's not formed properly. Does that form influence the activation of the astrocyte, and then the astrocytes, I guess, secrete inflammatory factors and drive more inflammation in the vessel? Or is there something coming from the CCM initially that's then activating the astrocyte? It's kind of a chicken-and-egg question, but do you have a sense, secondary to the malformation, of what the initial trigger is? Miguel Lopez-Ramirez: The malformations in our model, and this is important, in our model, definitely start by producing changes in the brain endothelium. And as you mentioned, this endothelium starts secreting molecules that directly affect the neighboring cells. One of the first neighboring cells that, at least, we have identified to be affected is the astrocyte, but it could also be pericytes or other cells that are in, or form part of, the neurovascular unit. But what we have seen now is that this interaction gets extended into the more robust interactions that you were referring to as the CaLM interactions. During the vascular malformations it may be this miscommunication; we have already identified a few of those very strong interactions, which are part of a follow-up manuscript that we have. But it could also be the blood-brain barrier breakdown, and other changes in the endothelium could also trigger the activation of the astrocytes and brain cells. Cindy St. Hilaire: What does your data suggest about potential future therapies for CCM? I know you have a really intriguing statement, or data, showing that targeting NF-kappa B isn't likely going to be a good therapeutic strategy. So maybe tell us just a little bit about that, but also, what does that imply, perhaps, about what a therapeutic strategy could be? Bliss Nelson: Originally we did think that the inhibition of NF-kappa B would cause an improvement, potentially downstream of the CCMs.
And unexpectedly, to our surprise, the partial or total loss of brain endothelial NF-kappa B activity in the chronic mouse model didn't prevent or improve lesion genesis or neuroinflammation, but instead resulted in a trend toward an increased number of lesions and immunothrombosis, suggesting that inhibiting it actually worsens the disease and that it shouldn't be used as a target for therapeutic approaches. Miguel Lopez-Ramirez: Yes. In particular, that's also part of the work that we have ongoing, in which NF-kappa B may also play a role in preventing a further increase of inflammation. So that is something that can also be very important, and it is very particular to certain cell types. Very little is known about what NF-kappa B is actually doing in the brain endothelium during malformations or inflammation per se. So now this is telling us that it is something we have to consider for the future. As for future therapeutics, we propose two main therapeutic targets. One is the harmful hypoxia pathway, which involves activation, again, of the coagulation pathway and inflammation, and the other is the inflammasome. These two avenues are part of our ongoing work, in trying to see if we have a way to target this inflammation in a safer and, basically, more efficient way. However, knowing the mechanisms by which this neuroinflammation takes place is the key to understanding the disease. And maybe anti-inflammatory compounds will not be the direct therapeutic approach, but by understanding these mechanisms we may come up with new approaches that will lead to safe and effective therapies. Cindy St. Hilaire: What was the most challenging part of this study? I'm going to guess it has something to do with the mice, but in terms of collecting the data or figuring out what's going on, what was the most challenging? Bliss Nelson: To that, I'd like to say that I think our team is very strong. We work very well together, so even the most challenging part of completing this paper wasn't so challenging, because we have a really strong support system among ourselves, with Miguel as a great mentor. And then there are also two postdocs in the lab who are also first authors and contributed a lot to it. Cindy St. Hilaire: Great. Well, I just want to commend both of you on an amazing, beautiful story. I loved a lot of the imaging in it, really well done, very technically challenging, I think, pulling out these specific sets of cells and investigating what's happening in them. A really well done study. And Bliss, as an undergraduate student, quite an impressive amount of work. I congratulate both you and your team on such a wonderful story. Bliss Nelson: Thank you very much. Miguel Lopez-Ramirez: Thank you to Bliss and also to Elios and Edo and Katrine, who all contributed enormously to the completion of this project. Cindy St. Hilaire: It always takes a team. Miguel Lopez-Ramirez: Yes. Cindy St. Hilaire: Great. Well, thank you so much, and I can't wait to see what's next for this story. Cindy St. Hilaire: That's it for the highlights from the October 28th and November 11th issues of Circulation Research. Thank you so much for listening. Please check out the Circulation Research Facebook page and follow us on Twitter and Instagram with the handle @CircRes and #DiscoverCircRes. Thank you to our guests, Dr Miguel Lopez-Ramirez and Bliss Nelson.
This podcast is produced by Ashara Retniyaka, edited by Melissa Stoner, and supported by the editorial team of Circulation Research. Some of the copy text for our highlighted articles is provided by Ruth Williams. I'm your host, Dr Cindy St. Hilaire, and this is Discover CircRes, your on-the-go source for the most exciting discoveries in basic cardiovascular research. This program is copyright of the American Heart Association 2022. The opinions expressed by speakers in this podcast are their own and not necessarily those of the editors or of the American Heart Association. For more information, please visit ahajournals.org.
00:00 Intro and listener poll 02:04 Galaxy A52 receives One UI 5 Beta 3 03:24 HUAWEI removes the Leica name from the P50 04:51 WhatsApp lets you create links for Zoom-style calls 05:53 Xiaomi Civi 2 is official and launches in China 10:18 HUAWEI Mate 50 Pro launches in Europe
TP and Liz look at today's earnings. They discuss the FOMC and how the markets could move after the announcement. They pinky swear to trade a META put at 2pm. TP also discusses the importance of the P50 number on the platform.
On today's show Dr. Jim joins Errol to help him put a trade on Tesla earnings. Dr. Jim warns Errol about trade entry as they look at P50 and what that means for the trade. With defined-risk strategies there's not as much flexibility, so trade entry is very important.
Eytan Uliel, CEO of Challenger Energy #CEG explains the plans for their Uruguay licence after final approval by decree of the President, progress in other parts of the business, and recent significant shareholder announcements. Challenger Energy (AIM: CEG), the Caribbean and Atlantic-margin focused oil and gas company, with oil production, appraisal, development and exploration assets across the region, provides the following update in relation to the AREA OFF-1 petroleum licence offshore Uruguay. · The AREA OFF-1 licence was awarded to the Company in May of 2020. Since that time there has been an extended period of non-activity, largely as a result of the Covid-19 pandemic, during which formal signature of the licence was pending. · Following final approvals being granted by decree of the President of Uruguay, the AREA OFF-1 licence was formally signed on 25 May 2022. Consequently, the first 4-year exploration period under the licence has commenced. · The Company's minimum work obligation during this initial period is to undertake relatively modest, low-cost reprocessing and reinterpretation of selected historical 2D seismic data. There is no drilling obligation in the initial phase. The Company's work program and budget for the balance of 2022 and into 2023 includes a sufficient allocation of funds to progress the agreed minimum work obligation on AREA OFF-1. · AREA OFF-1 contains a management-estimated resource potential exceeding 1 billion barrels of oil equivalent (BBOE) recoverable, based on current mapping from multiple exploration plays and leads in relatively shallow waters, and with significant upside running room. This estimate is corroborated by formal resource estimates provided by ANCAP (the Uruguayan national oil company) of 1.36 BBOE as a P50 expected ultimate recoverable resource. · The AREA OFF-1 play system is directly analogous to the recent prolific, conjugate-margin discoveries made offshore Namibia by Total (the Venus well) and Shell (the Graff well), where multi-billion-barrel Cretaceous turbidite reservoirs have reportedly been encountered. The AREA OFF-1 licence exhibits the same Aptian source rock and petroleum systems. · The Company has received multiple indications of interest in relation to potential partnerships for the AREA OFF-1 licence. The Company intends to explore such possibilities, with a view to potentially bringing forward a 3D seismic acquisition into the first licence exploration period. Further updates will be provided as and when appropriate.
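For readers unfamiliar with the notation: a P50 resource estimate is the median of a probabilistic distribution of recoverable volumes, meaning there is a 50% chance the ultimately recovered amount meets or exceeds it (P90 and P10 are the conservative and upside cases). The sketch below illustrates the idea with a toy Monte Carlo volumetric model; the distributions and input ranges are entirely hypothetical and bear no relation to ANCAP's actual assessment.

```python
import numpy as np

# Toy Monte Carlo volumetric model (hypothetical inputs, illustration only).
rng = np.random.default_rng(42)
n = 200_000

# Hydrocarbons initially in place (barrels of oil equivalent), lognormal,
# and a uniform recovery factor - both made-up ranges for illustration.
in_place = rng.lognormal(mean=np.log(8e9), sigma=0.6, size=n)
recovery = rng.uniform(0.10, 0.30, size=n)
recoverable = in_place * recovery

# Industry convention: Pxx is the volume exceeded with xx% probability,
# so P90 (the conservative case) is the 10th percentile of the outcomes.
p90, p50, p10 = np.percentile(recoverable, [10, 50, 90])
print(f"P90 {p90/1e9:.2f} / P50 {p50/1e9:.2f} / P10 {p10/1e9:.2f} BBOE")
```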
Podcast ONE: May 20, 2022. Gigabyte AORUS Project Stealth, Huawei Watch GT 3 PRO and P50 first impressions, Rolling Gunner, UFOs or "unidentified aerial phenomena", Cotton Fantasy review, farewell Vangelis, Nokia FastMile 5G-FWA, interview with Gui Cunha, Director of Distribution at maxiaNET, Internet Day 2022, Hot Sale, MasterCard payments with […]
Huawei launches its new-generation whole-home smart solution, along with the new P50 phone, tablets, watches…
We had the honor and the pleasure of chatting with Luca Locatelli about photography today and in the future. Luca, author of the "Back In Motion" project shot with the Huawei P50 Pro, is an Italian photojournalist who collaborates and has collaborated with National Geographic, The New York Times Magazine, Time, Bloomberg and many others. He won the Leica Oskar Barnack Award 2020 and two World Press Photo awards. Who is Luca? How does he photograph? What advice can he offer? Let's find out.
We chat about what we're watching, reading and listening to, plus the two reviews of the week: the Pixel 6 and the Huawei P50 Pro.- https://computerhoy.com/analisis/tecnologia/huawei-p50-pocket-analisis-opinion-1007343- https://computerhoy.com/analisis/google-pixel-6-review-opinion-1003971
We start with the death of James Bidgood, then move on to Huawei, which has announced the P50 Pro and Pocket, a photographer who has been photographing look-alikes for 20 years, and the inexpensive Agfa Photo camera for kids.
We go over the review of Huawei's most premium handset to date, the Huawei P50 Pro, and two accessories: the realme Watch 2 Lite and a pair of TWS earbuds, the SoundCore Liberty 3.- https://computerhoy.com/analisis/huawei-p50-pro-review-opinion-1000373- https://computerhoy.com/analisis/soundcore-liberty-3-pro-review-opinion-1000377- https://computerhoy.com/analisis/tecnologia/redmi-watch-2-lite-review-opinion-1001825
Each week, De Quoi Je Me Mail opens the debate on high-tech news! Together with journalists and digital-specialist personalities, we analyze and dissect the major trends of the moment. This Friday, we discuss the news with Anthony Morel, journalist at BFM Business, and Gonzague Dambricourt, entrepreneur and blogger. Module 1: Neuralink: Elon Musk wants to connect our brains DQJMM (1/2) - Elon Musk wants to test Neuralink, his brain implant, on humans in 2022 - Facebook inaugurates its supercomputer and charts its course toward the metaverse - Amazon will open a clothing store packed with technology - Huawei releases its new P50 smartphone series, without 5G and without Google Module 2: the key space dates of 2022 DQJMM (2/2) François Sorel welcomes Antoine Meunier, editor-in-chief of La Chronique Spatiale. What are the major dates in space exploration for 2022? We take stock.
Per The Paper, responding to its dividend-in-kind reduction of its JD.com stake, Tencent said that "growth companies in their investment and development phase" have always been its main investment focus, and that when an investee can sustainably fund itself, Tencent may exit at an appropriate time and share the gains with shareholders; its "long-term investment" strategy is unchanged. Tencent added that after the distribution it remains a strategic partner of JD.com, is still fully confident in JD.com's prospects, that their win-win business relationship is unaffected, and that it currently has no plan to further reduce its JD.com holding. Zhihu, responding to reports that Tencent and Sogou had exited as shareholders, said Beijing Zhizhe Tianxia Technology Co. is the domestic affiliate of its offshore listing entity ZHIHU INC., that the change is a normal corporate-governance adjustment and standard practice for listed China-concept companies, and that its actual controller and major shareholders' stakes are unchanged. 36Kr reports that Huawei officially released the P50 Pocket, its first vertically folding phone, featuring a dual-ring design with a stacked camera module and circular outer screen, a 21:9 display with 2790x1188 resolution, a 120Hz refresh rate and P3 global color management; it weighs 190g, is 7.2mm thick and 75.5mm wide. The Paper reports that as of December 22, loose bottles of 53-proof 500ml Feitian Moutai were selling for around RMB 2,800 in Beijing, Shanghai and Guangzhou stores, original-case bottles mostly around RMB 3,400 each, and bottles from opened cases around RMB 3,200; official dealers in Renhuai, Shanghai, Nanjing and Guangdong said ex-factory and guide prices are unchanged and no price-adjustment notice has been received from the group. Per CCTV News, Han Zirong, full-time vice president and secretary general of the Beijing Winter Olympics organizing committee, said all Games-related personnel must complete full vaccination at least 14 days before arriving in China to be exempt from centralized quarantine and enter closed-loop management; given the complexity of the global pandemic, boosters are strongly recommended, daily health monitoring and nucleic-acid testing are required within the loop, only dedicated Games vehicles may be used between designated venues, and contact with anyone outside the loop or with the public is prohibited. Securities Times reports that tax bureaus in Sichuan, Hebei, Shandong, Hunan, Heilongjiang and elsewhere issued notices urging celebrities and livestream hosts who have not yet addressed their tax issues, or whose self-correction is incomplete, to self-inspect against the tax law and related notices and to voluntarily report and correct issues before the end of 2021, in exchange for lighter, reduced or waived penalties; those who still refuse to self-correct, or do so incompletely, will be dealt with severely according to law and regulation. Market sources say Starbucks' three-year exclusive delivery partnership with Ele.me expires on December 31 and the company is seeking new partners in the China market, reportedly including SF Express, Meituan and Sam's Club, all of which have delivery capability and are in talks with Starbucks; industry observers link the open strategy to antitrust pressure on the internet sector, especially against "pick one of two" exclusivity, making binding to a single platform no longer a priority.
113 new - now 936 cases, 9 in Nobles, 0 in ICU, trial of Joseph Marshall Day 2, PCR test efficacy, UK election votes, South Quay 20mph & P50 marathon. It's Update with Andy Wint #iom #manxradio #news
Today's focus: [Huawei jokes that phones sold in the US get a 100% discount] US Christmas shopping generally starts the day after Thanksgiving, with heavy in-store discounting. Huawei USA "announced" a Black Friday special on its official Twitter, joking that "all our phones on sale in the US are 100% off"; Huawei later clarified it was a joke, since it cannot sell anything in the US. Moreover, when Yu Chengdong launched the P50 series in July, he said that under four rounds of US sanctions Huawei's 5G phones are restricted and its 5G chips can only be used as 4G. [Hangzhou raises new-home price caps, up RMB 2,000/sqm in some districts] Hangzhou's third centralized land sale confirmed that some land prices fell, but more notably the city's new-home price caps were broken: caps rose in at least 13 districts, by as much as RMB 2,000 per square meter, and industry figures expect the new price system to reshape the local housing market. Company news: [VW China CEO Stephan Wöllenstein responds to departure rumors] saying he will not leave China in the near term, while confirming that management changes, including his own position, will occur, and noting both the advantages of his long tenure and the company tradition of rotating managers after many years in one post. [Koolearn (New Oriental Online) shares double in 10 days] rising another 23% on November 22 to HK$8.49, up 106% over ten trading days to a market value of HK$8.5 billion, linked to Yu Minhong's continued share purchases, the pivot to agricultural livestream e-commerce, and institutional buying of education stocks in Q3. [Dong Mingzhu: "from now on we're two-day-weekend people"] Gree formally adopted a full two-day weekend on November 22, canceling all overtime except where approved by company leadership; Dong Mingzhu shared the notice on the official Weibo account. [NetEase Cloud Music to list on December 2] launching its Hong Kong IPO of 16 million ordinary shares at HK$190-220 each, raising HK$3.04-3.52 billion, under ticker 9899.HK. [Pinduoduo Pay trademarks registered] Shanghai Xunmeng Information Technology's "Pinduoduo Pay" trademarks, first filed on January 14, 2021 across categories including website services, education and entertainment, communications and scientific instruments, were registered successfully, though some financial and advertising categories remain under substantive review or appeal. [Tao Hong exits as shareholder of Zhang Ting couple's media company] Shanghai Taobuting Culture Media, founded in July 2020 with RMB 50 million registered capital, added TV-drama distribution to its business scope; Zhang Shuqin and Lin Jirong (the Zhang Ting couple) remain senior managers, and the company was renamed from "Tao bu ting" to "Tao(amended) bu ting" in July. [Mainland China's first Walmart to close at month-end] The Honghu Walmart in Shenzhen, the first in mainland China, will close after 25 years, in Walmart's 25th year in China, to the stated regret of nearby residents. Industry depth: [Henan to fully abolish gas and heating connection fees by end-2025] Fees collected by water, power, gas and heating utilities under names such as network-access, pipeline-construction or connection fees, authorized by city or county governments, must all be canceled by end-2025; for projects approved after March 1, 2021, such costs go into land or housing development costs and may not be charged separately to users. [Zero deaths: China's COVID antibody therapy may gain conditional approval by year-end] The BRII-196/BRII-198 combination developed by Tsinghua University, Shenzhen Third People's Hospital and Brii Biosciences was unblinded in Phase III, with zero deaths in the treatment arm after 28 days versus 8 in the control arm; it is China's fastest-progressing antibody drug and the only one with evaluated efficacy data against variant infections. [813.59 tons: gold consumption enters peak season early] China Gold Association data show national gold consumption in the first three quarters of 2021 was 813.59 tons, up 48.44% year-on-year and 5.89% over pre-pandemic 2019, including 529.06 tons of gold jewelry, up 54.21% year-on-year; the approach of New Year and Spring Festival is expected to lift demand further. [China's highest per-capita GDP city is Karamay] Not Beijing, Shanghai, Guangzhou or Shenzhen, nor the eastern coast, but the oil city on the edge of the Gurbantunggut Desert in northern Xinjiang: 2020 GDP of RMB 88.7 billion, or RMB 180,000 per capita, 1.2x Shanghai's, 1.4x Wuhan's and double Chengdu's. [Shenzhen market regulator responds to thousand-yuan drink] After a RMB 1,000 olive juice drew attention, the regulator said on-site verification has begun and updates will follow; the shop's menu also includes RMB 108 mangosteen and durian juices, with the cheapest teas at RMB 28. International: [Einstein relativity manuscript valued at up to EUR 3 million] A rare manuscript by Albert Einstein containing preparatory work for relativity went to auction in Paris on the 23rd, billed as the most valuable Einstein manuscript ever auctioned, with an estimate of EUR 2-3 million (roughly RMB 14.35-21.5 million). [Apple to appeal Italian fine] After a 16-month investigation, Italy's antitrust authority fined Amazon EUR 68.7 million and Apple EUR 134.5 million, saying the two companies' agreement harmed consumers and some sellers and violated free competition, Italian law and European standards.
On today's Ingrata Resenha we talk about the Huawei P50 Pro and its powerful cameras! All this and much more, with guests, irreverence and the usual hot takes. Subscribe to the channel and follow our podcast on the major streaming services and also on Instagram and Twitter. site: www.osingratos.com.br e-mail: podcastosingratos@gmail.com Send a Pix and support the channel: pix@osingratos.com.br Buy your shirt and support the channel at: www.osingratos.com.br/loja If you're an Amazon Prime subscriber, don't forget to stop by our Twitch channel and subscribe for free with Amazon Prime; thanks in advance! Twitch channel: https://www.twitch.tv/osingratospodcast Telegram group: https://t.me/joinchat/RUG70E3DQhuav2w7 Conectado channel: https://www.youtube.com/c/Transitando7 Rodrigo Portella (geek loko) channel: https://youtube.com/c/RodrigoPortellatec Tech Place Brasil channel: https://www.youtube.com/TechPlaceBrasil Pablo Tech Tips channel: https://youtube.com/pablotechtips Mobgrafando channel: https://www.youtube.com/c/Mobgrafando Ver Tech channel: https://www.youtube.com/channel/UCV4-RTB1XQHbhxbbdArjC8w DicioTech channel: https://www.youtube.com/diciotech Haroldo Colares channel: https://www.youtube.com/HaroldoColares GTonlive channel: https://www.youtube.com/gtonlive #ingrataresenha #osingratos #podcast #huawei #p50 --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/OsIngratos/support
Despite saying that the pandemic response is a priority, the executive department of the Duterte administration slashed the P50.4 billion allotted for healthcare workers' allowances and other benefits from the proposed 2022 budget. "It's so important for us to know exactly what the government wants to do — and what it really wants to do is reflected in the national budget. It's not really in the public statements that officials make day in and day out during their press conferences," said Zy-za Nadine Suzara, executive director of think tank Institute for Leadership, Empowerment, and Democracy (iLead). In this B-Side episode, Ms. Suzara tells BusinessWorld reporter Kyle Aristophere T. Atienza why broader civil society should pay attention to the ongoing budget deliberations. Recorded remotely on Sept. 11. Produced by Paolo L. Lopez and Sam L. Marcelo.
Here it is! Our interviews with WWE superstars Ilja Dragunov and Riddle, two champions in their own right, exclusively presented to you by Minute Burger! Minute Burger is here to save you from tapping out from an empty stomach. Fix yourself a snack and order all-time faves like their Bacon Cheeseburger & Black Pepper Burger via their website at www.mbgo.ph! Get the kitchen table ready because MB is giving a max of P50 discount on all BIG TIME burgers if you purchase through the website. End and DELETE your burger cravings. Order online and pick up conveniently. Only with MB Go. Follow Minute Burger on social media: Facebook: facebook.com/minuteburger Instagram: instagram.com/minuteburger TikTok: tiktok.com/@minuteburgerph Promo codes and affiliate links: http://linktr.ee/wrestlingwrestlingpodcast DISCLAIMER: The views and opinions expressed by the podcast creators, hosts, and guests do not necessarily reflect the official policy and position of Podcast Network Asia. Any content provided by the people on the podcast are of their own opinion, and are not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything. --- Send in a voice message: https://anchor.fm/wrestling-wrestling/message
After several months of waiting, Huawei officially unveiled the P50, which arrived alongside several other products.
At tastytrade, we are high probability traders at our core. Regardless of the specific stock or given strategy, we always opt to put ourselves in a position where we are successful more often than not. As a result, we lean heavily on both POP (probability of profit) and P50 (probability of making 50% of max profit). And as we see with our NFLX position on the show today, while these two metrics both measure probability, they do so in very different ways.
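Since the POP-versus-P50 distinction comes up repeatedly in these episodes, here is a minimal sketch of how the two metrics can differ for a short put under Black-Scholes assumptions. tastytrade's platform computes P50 with its own simulation engine; everything here (the parameters, the 45-day horizon, the pricing model) is an illustrative assumption, not the platform's actual method.

```python
import numpy as np
from scipy.stats import norm

def bs_put(s, k, t, r, vol):
    """Black-Scholes European put price; t is time to expiry in years."""
    t = np.maximum(t, 1e-9)  # avoid division by zero at expiration
    d1 = (np.log(s / k) + (r + 0.5 * vol**2) * t) / (vol * np.sqrt(t))
    d2 = d1 - vol * np.sqrt(t)
    return k * np.exp(-r * t) * norm.cdf(-d2) - s * norm.cdf(-d1)

rng = np.random.default_rng(0)
s0, k, vol, r = 100.0, 95.0, 0.30, 0.02     # hypothetical underlying, strike, vol, rate
days, n_paths, dt = 45, 20_000, 1 / 252
credit = bs_put(s0, k, days * dt, r, vol)   # premium received for the short put

# Simulate geometric-Brownian-motion price paths, one row per path.
z = rng.standard_normal((n_paths, days))
paths = s0 * np.exp(np.cumsum((r - 0.5 * vol**2) * dt + vol * np.sqrt(dt) * z, axis=1))

# Mark the short put to market each day: profit = credit - cost to close.
t_left = (days - np.arange(1, days + 1)) * dt
profit = credit - bs_put(paths, k, t_left, r, vol)

pop = np.mean(profit[:, -1] > 0)                     # profitable if held to expiration
p50 = np.mean((profit >= 0.5 * credit).any(axis=1))  # ever touches 50% of max profit
print(f"POP ~ {pop:.0%}   P50 ~ {p50:.0%}")
```

Because a 50%-of-max-profit target can be hit at any point before expiration, P50 typically comes out higher than POP for premium-selling trades, which is exactly the difference the hosts highlight.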
This week Gavin and Lindsey delay the folding phone talk as long as possible on South Africa's most accessible consumer technology podcast. Is the Honor Magic 3 what the P50 should've been? Is Samsung trying to create a market to license its folding intellectual property? Will Telkom take Joox to the heights that it should be hitting? Get all the answers to those questions, get the full story about why Lindsey sold the Samsung Galaxy S21 Ultra, and see how long until the first Apple reference - it's actually a new record for the podcast. Gavin's main hustle is here: http://techmagazine.co.za/ And you can also find us here: http://thatopinionguy.co.za https://global.techradar.com/en-za Follow Gavin at https://facebook.com/TechMagazineZA/ Find Lindsey at https://twitter.com/SharpSchutters. Email us at overclocked@gmail.com Recorded on LG Velvet Produced by Lindsey Schutters Music by Lindsey Schutters
Apple has opened up a serious topic. To say that the new system for combating child sexual abuse material, which Apple will introduce on its devices, has caused heated reactions online would be an understatement, above all because it involves content detection on the device itself. In this episode we try, first of all, to clarify and explain how it will actually work, so that the discussion and conclusions on this topic make sense at all. Given the importance of the topic, we hope we have contributed to a better understanding; we remain at your disposal and urge you to send us your questions through all available channels, so we can try to clear up everything in our power. Among the other news, we highlight the first information about the Pixel 6, as well as Google's Tensor chip that will power it. Huawei's novelty for this part of the season is called the P50, and it was not left out of episode 128 of the Tehnopolis podcast. Thanks for listening! Follow the Tehnopolis podcast RSS: https://www.b92.net/podcast/tehnopolis/feed/ Apple Podcasts: https://podcasts.apple.com/podcast/tehnopolis/id1185520336?mt=2 Google Podcasts: https://bit.ly/32Y1mlO Aleksandar Miladinović https://twitter.com/alexmiladinovic Ivan Jelić https://twitter.com/escapetofreedom https://mastodon.social/@escapetofreedom
Pixel 6 Preview: Here's What Google's First Smartphone Chip Can Do This is The Pixel 6, Google's Take On An 'Ultra High End' Phone Google's Pixel 6 phones are coming with a chip designed in-house The Pixel 6 camera will try to magically unblur people's faces in your photos Google is aiming for iPhone-like video quality with the Pixel 6's camera Google Is Finally Killing Off a Decade-Old Version of Android Google begins showing what its new Play Store safety listings will look like Huawei's P50 was announced with Snapdragon 888 and HarmonyOS You can now pre-order Qualcomm's crazy Snapdragon Insider phone for a cool $1,500 The OnePlus 9T has reportedly been canceled A Cheaper YouTube Premium May Be Coming Soon Google Reader is returning from the dead and haunting Google Chrome Rec Room begins its early rollout on select Android devices Wearable recommendations for swimming. - Why do you think Apple Watch is better than the others? Expanding the Google TV with Chromecast hardware Read our show notes here: https://bit.ly/3AbkpYe Hosts: Jason Howell, Florence Ion, and Ron Richards Guest: Russell Holly Subscribe to All About Android at https://twit.tv/shows/all-about-android. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsor: udacity.com/TWiT offer code TWIT75
Guests: Peng Lin, Sensen, Atian. News segment: · 0:00:56 - OPPO launches Detective Conan co-branded products · 0:14:11 - Intel announces its process roadmap and new processor naming scheme · 0:23:40 - Samsung confirms no new Galaxy Note in the near term · 0:26:42 - Huawei P50 series officially launched · 1:17:00 - Microsoft Flight Simulator officially lands on Xbox Series X|S · 1:21:22 - Sony PS5's latest beta firmware supports M.2 SSDs. Listener questions: · 1:24:13 - BraveSCV: I'm thinking of buying the Mac mini that may be updated this October. I originally wanted the LG UltraFine 4K display, but it seems hard to buy now, and I'll probably also get an Xbox - can that display connect well to an Xbox? If not, can you recommend alternatives of similar quality in the same price range (under 6,000)? Wishing Aifou faster releases and ever-better quality. · 1:27:55 - a listener: I'm graduating and job-hunting; I'd like Peng to share his views on hiring and interviews, and how things at Aifou differ from his time at a big company. · 1:40:19 - ChimerAvoid: I recently came across UV sanitizing boxes - are they harmful to electronics like phones and wireless earbuds? (Toothbrush heads and keys should be fine, right?) Thanks. · 1:33:00 - bullet-comment interaction.
Today's focus: [Seven departments begin on-site cybersecurity review of Didi] Per the Cyberspace Administration of China, in line with the cybersecurity review work plan, on July 16 the CAC, together with the Ministry of Public Security, Ministry of State Security, Ministry of Natural Resources, Ministry of Transport, State Taxation Administration and State Administration for Market Regulation, jointly stationed personnel at Didi Chuxing Technology Co. to conduct a cybersecurity review. [Cash splash: the ride-hailing subsidy war returns] Meituan Dache, Caocao Chuxing and other platforms have rolled out new-user promotions to attract drivers and passengers, and both groups report using them; industry figures say the platforms aim to lift market share and user numbers during the industry lull, though they remain at a clear technology and scale disadvantage versus the leading platform. Company news: [Shuanghui "crown prince" tells the inside story of his dismissal: pinned down by bodyguards] Wan Hongjian, eldest son of founder Wan Long, wrote that after a June 3 disagreement in Wan Long's office over an executive appointment he was berated, vented his anger by punching a door and butting a glass cabinet, and was pinned to the floor bloodied by bodyguards while Wan Long ordered photos for evidence; he says Wan Long intends to stay on for five more years with no succession plan, adding, "I actually have no power in the company; there is no power struggle between us. I merely voiced different views on the business, and that enraged him." (Beike Caijing) [Xiaomi rises to world No. 2 in smartphone sales; Lei Jun donates 616 million shares] Canalys' Q2 report put Xiaomi's global smartphone share at 17%, up 83% year-on-year, overtaking Apple to become No. 1 among Chinese brands and No. 2 globally; separately, chairman Lei Jun will donate about 616 million Class B shares to the Xiaomi Foundation and the Lei Jun Foundation for public-welfare projects. [Huawei P50 series to launch July 29] A Huawei insider confirmed the date; the phones may carry Snapdragon 888-series chips. (Jiemian) [Zhou Hongyi: every PC has 360 installed, a hurdle for the world's hackers] The 360 chairman said in an interview that because every computer runs 360, hackers wanting to act in China must first get past it, and that 360 has collected 30 billion attack samples worldwide. [Regulator extends takeover of six institutions by one year] The CBIRC extended its administration of Tian'an Property Insurance, Huaxia Life, Tian'an Life, Yi'an Property Insurance, New Times Trust and New China Trust from July 17, 2021 through July 16, 2022. [XPeng AeroHT unveils fifth-generation aircraft X2] He Xiaopeng said the officially released X2 marks another step toward broadly safe flying cars; based on earlier material it is the Voyager X2, a fully enclosed two-seater with an all-carbon-fiber body sharing the P7's design genes. [Li Auto filed for Hong Kong listing in May] seeking a dual primary listing, with Goldman Sachs among its sponsors; if all goes smoothly it could list around the end of August, with Hong Kong becoming a primary listing venue alongside the US. (Tencent Yixian) [Xiaohongshu pauses US IPO plan] Reuters' IFR had reported in April that it planned a US IPO around mid-year raising US$500 million to US$1 billion; the company has not commented. [Tencent recruiting chip R&D talent] Tencent's careers site lists chip architect, verification and design engineer roles in Beijing, Shanghai and Shenzhen; Tencent says that driven by business needs it is attempting chip R&D in specific areas such as AI acceleration and video codecs, not general-purpose chips. [Skyworth to spin off Skyworth Electric] for an independent listing on Shenzhen's ChiNext; the unit develops, makes and sells refrigerators, freezers, washing machines and dryers. [Largest deal ever? Intel reportedly weighing US$30 billion (about RMB 200 billion) acquisition of GlobalFoundries] which would further boost its foundry capacity and become the largest acquisition in Intel's history. (Wall Street Journal) Industry depth: [National carbon market launches; first trade at RMB 52.78/ton] The first nationwide carbon trade matched at 9:30 a.m. at RMB 52.78 per ton; day-one prices ran above pilot-market levels, which had long floated between RMB 20-80/ton, with the seven pilot provinces' weighted average around RMB 40/ton according to vice minister of ecology and environment Zhao Yingmin. [Financial institutions fined a combined RMB 298 million] The CBIRC found that Minsheng Bank, SPD Bank and Bank of Communications, in interbank, wealth-management and entrusted-loan businesses, had non-compliant fund flows including illegal financing of real estate and local governments, fining them RMB 114.5 million, RMB 69.2 million and RMB 41 million respectively, while the Export-Import Bank had illegal gains confiscated and was fined a combined RMB 73.456 million. [Over one-third of central SOEs are laying out full hydrogen chains] covering production, storage, refueling and use, with a batch of R&D and demonstration results, per SASAC secretary general Peng Huagang, who added that SASAC is actively drafting opinions and plans so central SOEs play a stronger role in carbon peaking and carbon neutrality. [Beijing drafts new rule: shared-ownership housing may be rented out] Under a draft led by the municipal housing commission, rentals of shared-ownership homes would go through a network platform built by the city-level co-holding institution, providing listing verification, publication, online contract signing and filing services.
On the 34th episode of This Week in Tech, we discuss some events that happened over the past couple of days. Samsung's rumoured new animated virtual assistant, Paramount's new streaming service, a recent Google search showing a definitive answer to a racist query, and the latest Huawei cameras are among the events we discuss in this episode. Episode Timeline 00:42 Zemach FM's new and better website announcement. 03:50 Ethiotelecom expands 4G LTE service to more parts of Ethiopia. 04:20 Google shows a search result for the query "Ugliest language in Ethiopia" 06:47 Samsung's new animated virtual assistant 09:50 AirTag gets a software update after a stalking report. 13:36 Paramount+ launches its own streaming service. 18:16 Huawei announces the P50 with a new camera design and its own HarmonyOS 20:10 Instagram launches a new product explore tab in its latest release. Contact the hosts Henok Tsegaye Twitter Instagram LinkedIn Abdulhadmid Oumer Twitter Instagram LinkedIn Follow Zemach FM and leave us a comment
Today's focus: [HarmonyOS is here! Huawei P50 launch date still unset] On the evening of June 2, Huawei officially released HarmonyOS 2 and several new devices running it, meaning phones running HarmonyOS are now real market products; Huawei says it will progressively push HarmonyOS 2 upgrades to its phones, tablets and smart screens, reaching nearly 100 device models by the first half of next year. Yu Chengdong added that, "for well-known reasons," the P50 series launch date is still undetermined. [Tesla's legal department warns bloggers by direct message] Tesla's newly opened official "Tesla Legal Department" Weibo account, verified on May 31 and with no posts yet, has drawn attention; blogger "Wuqiannian de Tuzi" says that as early as May 29 he received a direct message from the then-unverified account saying: we have noticed you are continuing to publish infringing content about our company; please be advised we have filed suit in a Beijing court; contact us via the above channels for further communication. (Sanyan Caijing) [CBIRC: celebrity endorsement fees from P2P lenders under case will be clawed back] Vice chairman Liang Tao said on June 1 that for the 999 online lending institutions already under case, authorities will coordinate with police and judicial organs to speed up trials, recover losses, and claw back executive bonuses plus celebrity endorsement and advertising fees. Company news: [After ninefold compensation, "Lüsao" accused again, this time of selling fake ZTE phones] Following the earlier incident of problematic Doov phones, for which the platform promised upfront ninefold compensation, consumers now allege the livestreamer sold counterfeit "ZTE" handsets; ZTE denies making the phones in question and the seller has not responded. [Hundreds of owners accuse Li Auto of deceptive sales, demand "refund plus triple compensation"] After the 2021 Li ONE launched with big configuration upgrades for only RMB 10,000 more, owners claimed Li Auto deliberately concealed the new model and misled or induced them to buy the old one; after receiving an RMB 8,000 goodwill payment, many owners refused it, alleging false advertising and demanding a refund plus triple compensation; Li Auto has not officially responded. [Aihuishou files for US IPO] Second-hand consumer-electronics platform ATRenew (Wanwu Xinsheng / Aihuishou) formally filed with the SEC to list on the NYSE; a CIC report projects China's second-hand electronics transactions will reach 546 million units by 2025. [Markor Home stock-manipulation affair keeps fermenting] Whistleblower Yang Zhen says he spent six months assembling evidence and delivered it to the CSRC on May 24; both parties responded on June 1, with Xu Yafei claiming he was merely a middleman and that the manipulation talk was drunken boasting, which Yang disputes, saying he does not drink. [Apple becomes the most profitable US company of 2021] Fortune's 2021 US 500 list, released June 2, put Apple back atop the profit ranking with profit up 3.9% and Microsoft second at up 12.8%; tech and finance remain the most profitable sectors, with 9 of the top 10 from those industries. [Man's Dogecoin investment rises 200-fold but cannot be cashed out] In 2017 a Beijing investor put RMB 100,000 into Dogecoin at about 2 fen per coin as a long-term bet; in under four years it grew 200-fold to more than RMB 10 million, but the Biyun Technology trading site then shut down, leaving the holdings unredeemable numbers on a screen. (Beijing TV) [Changsha law firm disciplined for charging too little] The Changsha Lawyers Association reported a case in which a firm charged only RMB 5,000 to represent a civil case whose guideline minimum fee was over RMB 410,000; it found the fee so far below the guidance as to constitute unfair competition and gave the firm and lawyer Yan a warning. Authoritative voices: [Ministry of Education requires schools to provide sex education] The Regulations on School Protection of Minors, released June 1, require schools to provide targeted puberty and sex education so students understand physiological health and improve their awareness and ability to protect themselves against sexual assault and harassment. [Guangzhou and Shenzhen crack down on "bridge loans" and "redemption loans"] Guangzhou's local financial regulator barred microlenders from such business, requiring existing exposure to be wound down, and from directly or indirectly issuing housing mortgage loans; Shenzhen summoned financing-guarantee companies and required a full audit of business-purpose loan guarantees, focusing on real-estate-linked guarantees and entrusted loans. [Ministry of Culture and Tourism: improve national leisure and paid-leave systems] Its 14th Five-Year Plan for culture and tourism, published June 2, calls for advancing national leisure and paid vacation, creating "small but beautiful" city reading rooms and culture stations, improving low ticket prices and theater open days, encouraging local consumption seasons and months, and supporting young hosts and bloggers in promoting culture abroad. International: [Four international organizations call for US$50 billion in global pandemic financing] On June 1 the IMF, World Bank, WHO and WTO jointly urged urgent funding of a new US$50 billion roadmap to speed the equitable distribution of health tools, help end the pandemic, and lay the foundation for a true global recovery and stronger health security. [US media: RMB may become a major global reserve currency sooner than most expect] Bridgewater founder Ray Dalio said on June 1 that the renminbi will become a major global reserve currency in a shorter time than most people expect, telling CNBC that its role and scope of use in international trade and financial settlement will keep growing in the coming years.
P50: How you can achieve success through self-confidence. How to increase self-confidence: you can carry your self-confidence sky-high at any time. You have to make the effort, and you will surely succeed, no doubt about it. Keep moving forward with determination and win the battle of life. Visit our website www.wondertips777.com