Podcasts about inductor

  • 20 podcasts
  • 27 episodes
  • 43m average duration
  • 1 new episode per month
  • Latest episode: Oct 16, 2024

POPULARITY

[Popularity chart, 2017-2024]


Best podcasts about inductor

Latest podcast episodes about inductor

The Art of Value
Elon Musk's Greatest Hits of ‘Puffery': Tesla We Robot Event

The Art of Value

Play Episode Listen Later Oct 16, 2024 30:25


I react to Elon Musk's remarks from the flashy Tesla We, Robot event held in Los Angeles. Elon's presentation can be seen as a kind of greatest hits of his “corporate puffery” from the past few years, with some new products and further additions to his vision of the future, which he says will soon become a reality. Timestamps: 01:05 Definition of Elon's “corporate puffery” 03:09 Elon's welcome to the We, Robot party 05:06 Vision of an exciting future - Unsupervised Full Self-Driving 05:56 Low cost but increased value with autonomy 06:58 FSD safety - 10x safer than a human 08:31 Low cost of autonomous transport 09:40 Price of buying a cybercab 10:22 Owning a fleet of cybercabs 11:13 Timeline predictions for FSD and cybercab production 14:01 Unsupervised FSD for Hardware 3? 15:41 Camera vision + AI solution of Tesla Full Self-Driving 18:21 Inductive charging and cybercab cleaning 20:00 The future Robovan 21:21 “The future should look like the future” 23:04 Optimus robot progress, future vision, buy price 25:27 “The biggest product ever of any kind” 26:52 The Age of Abundance 27:57 How much of what Elon says is possible? Related videos: The Elon Musk Tesla Cybertruck: Sucks or Success? https://youtu.be/2DO8to-NkTQ Elon Musk: Tesla Full Self Driving Snake Oil Salesman? https://youtu.be/MG1y8_X40Os JJ's other YouTube channel: Stocks Today With JJ: https://www.youtube.com/@jjstockstoday Become a member of The Art of Value: https://www.youtube.com/@TheArtofValue/join Recommended book: Abundance: The Future Is Better Than You Think, by Peter Diamandis (referral link): https://amzn.to/4eQpeeE Disclaimer: I am not a financial adviser and nothing in this content is financial advice. This content is for general education and entertainment purposes only. Do your own analysis and seek professional financial advice before making any investment decision.

The Nonlinear Library
AF - A Solomonoff Inductor Walks Into a Bar: Schelling Points for Communication by johnswentworth

The Nonlinear Library

Play Episode Listen Later Jul 26, 2024 35:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Solomonoff Inductor Walks Into a Bar: Schelling Points for Communication, published by johnswentworth on July 26, 2024 on The AI Alignment Forum. A Solomonoff inductor walks into a bar in a foreign land. (Stop me if you've heard this one before.) The bartender, who is also a Solomonoff inductor, asks "What'll it be?". The customer looks around at what the other patrons are having, points to an unfamiliar drink, and says "One of those, please.". The bartender points to a drawing of the same drink on a menu, and says "One of those?". The customer replies "Yes, one of those.". The bartender then delivers a drink, and it matches what the first inductor expected. What's up with that? The puzzle, here, is that the two Solomonoff inductors seemingly agree on a categorization - i.e. which things count as the Unnamed Kind Of Drink, and which things don't, with at least enough agreement that the customer's drink-type matches the customer's expectations. And the two inductors reach that agreement without learning the category from huge amounts of labeled data - one inductor points at an instance, another inductor points at another instance, and then the first inductor gets the kind of drink it expected. Why (and when) are the two inductors able to coordinate on roughly the same categorization? Most existing work on Solomonoff inductors, Kolmogorov complexity, or minimum description length can't say much about this sort of thing. The problem is that the customer/bartender story is all about the internal structure of the minimum description - the (possibly implicit) "categories" which the two inductors use inside of their minimal descriptions in order to compress their raw data. The theory of minimum description length typically treats programs as black boxes, and doesn't attempt to talk about their internal structure. 
In this post, we'll show one potential way to solve the puzzle - one potential way for two minimum-description-length-based minds to coordinate on a categorization. Main Tool: Natural Latents for Minimum Description Length Fundamental Theorem Here's the main foundational theorem we'll use. (Just the statement for now, more later.) We have a set of n data points (binary strings) {xi}, and a Turing machine TM. Suppose we find some programs/strings Λ,{ϕi},Λ',{ϕ'i} such that: Mediation: (Λ,ϕ1,…,ϕn) is an approximately-shortest string such that (TM(Λ,ϕi) = xi for all i) Redundancy: For all i, (Λ',ϕ'i) is an approximately-shortest string such that TM(Λ',ϕ'i) = xi.[1] Then: the K-complexity of Λ' given Λ,K(Λ'|Λ), is approximately zero - in other words, Λ' is approximately determined by Λ, in a K-complexity sense. (As a preview: later we'll assume that both Λ and Λ' satisfy both conditions, so both K(Λ'|Λ) and K(Λ|Λ') are approximately zero. In that case, Λ and Λ' are "approximately isomorphic" in the sense that either can be computed from the other by a short program. We'll eventually tackle the customer/bartender puzzle from the start of this post by suggesting that Λ and Λ' each encode a summary of things in one category according to one inductor, so the theorem then says that their category summaries are "approximately isomorphic".) The Intuition What does this theorem mean intuitively? Let's start with the first condition: (Λ,ϕ1,…,ϕn) is an approximately-shortest string such that (TM(Λ,ϕi) = xi for all i). Notice that there's a somewhat-trivial way to satisfy that condition: take Λ to be a minimal description of the whole dataset {xi}, take ϕi=i, and then add a little bit of code to Λ to pick out the datapoint at index ϕi[2]. So TM(Λ,ϕi) computes all of {xi} from Λ, then picks out index i. Now, that might not be the only approximately-minimal description (though it does imply that whatever approximately-minimal Λ,ϕ we do use is approximately a minimal description fo...
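For readers following along in audio, the fundamental theorem quoted above can be written out compactly. This is a restatement in the post's own notation, with the assumption that "approximately-shortest" means approximately minimizing total string length ℓ(·), and K(·|·) denoting conditional Kolmogorov complexity:

```latex
% Mediation: one shared summary plus a per-datapoint code is jointly minimal.
\text{Mediation:}\quad (\Lambda,\phi_1,\dots,\phi_n)\ \text{approx.\ minimizes}\
  \ell(\Lambda)+\sum_i \ell(\phi_i)\ \text{s.t.}\ TM(\Lambda,\phi_i)=x_i\ \ \forall i

% Redundancy: the same summary-plus-code split is minimal for each datapoint alone.
\text{Redundancy:}\quad \forall i:\ (\Lambda',\phi'_i)\ \text{approx.\ minimizes}\
  \ell(\Lambda')+\ell(\phi'_i)\ \text{s.t.}\ TM(\Lambda',\phi'_i)=x_i

% Conclusion: the redundant summary is determined, up to a short program,
% by the mediating one.
\text{Conclusion:}\quad K(\Lambda'\mid\Lambda)\approx 0
```

When both Λ and Λ' satisfy both conditions, the conclusion applies in each direction, giving the "approximately isomorphic" category summaries discussed in the episode.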

The Nonlinear Library
LW - A Solomonoff Inductor Walks Into a Bar: Schelling Points for Communication by johnswentworth

The Nonlinear Library

Play Episode Listen Later Jul 26, 2024 35:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Solomonoff Inductor Walks Into a Bar: Schelling Points for Communication, published by johnswentworth on July 26, 2024 on LessWrong. A Solomonoff inductor walks into a bar in a foreign land. (Stop me if you've heard this one before.) The bartender, who is also a Solomonoff inductor, asks "What'll it be?". The customer looks around at what the other patrons are having, points to an unfamiliar drink, and says "One of those, please.". The bartender points to a drawing of the same drink on a menu, and says "One of those?". The customer replies "Yes, one of those.". The bartender then delivers a drink, and it matches what the first inductor expected. What's up with that? The puzzle, here, is that the two Solomonoff inductors seemingly agree on a categorization - i.e. which things count as the Unnamed Kind Of Drink, and which things don't, with at least enough agreement that the customer's drink-type matches the customer's expectations. And the two inductors reach that agreement without learning the category from huge amounts of labeled data - one inductor points at an instance, another inductor points at another instance, and then the first inductor gets the kind of drink it expected. Why (and when) are the two inductors able to coordinate on roughly the same categorization? Most existing work on Solomonoff inductors, Kolmogorov complexity, or minimum description length can't say much about this sort of thing. The problem is that the customer/bartender story is all about the internal structure of the minimum description - the (possibly implicit) "categories" which the two inductors use inside of their minimal descriptions in order to compress their raw data. The theory of minimum description length typically treats programs as black boxes, and doesn't attempt to talk about their internal structure. 
In this post, we'll show one potential way to solve the puzzle - one potential way for two minimum-description-length-based minds to coordinate on a categorization. Main Tool: Natural Latents for Minimum Description Length Fundamental Theorem Here's the main foundational theorem we'll use. (Just the statement for now, more later.) We have a set of n data points (binary strings) {xi}, and a Turing machine TM. Suppose we find some programs/strings Λ,{ϕi},Λ',{ϕ'i} such that: Mediation: (Λ,ϕ1,…,ϕn) is an approximately-shortest string such that (TM(Λ,ϕi) = xi for all i) Redundancy: For all i, (Λ',ϕ'i) is an approximately-shortest string such that TM(Λ',ϕ'i) = xi.[1] Then: the K-complexity of Λ' given Λ,K(Λ'|Λ), is approximately zero - in other words, Λ' is approximately determined by Λ, in a K-complexity sense. (As a preview: later we'll assume that both Λ and Λ' satisfy both conditions, so both K(Λ'|Λ) and K(Λ|Λ') are approximately zero. In that case, Λ and Λ' are "approximately isomorphic" in the sense that either can be computed from the other by a short program. We'll eventually tackle the customer/bartender puzzle from the start of this post by suggesting that Λ and Λ' each encode a summary of things in one category according to one inductor, so the theorem then says that their category summaries are "approximately isomorphic".) The Intuition What does this theorem mean intuitively? Let's start with the first condition: (Λ,ϕ1,…,ϕn) is an approximately-shortest string such that (TM(Λ,ϕi) = xi for all i). Notice that there's a somewhat-trivial way to satisfy that condition: take Λ to be a minimal description of the whole dataset {xi}, take ϕi=i, and then add a little bit of code to Λ to pick out the datapoint at index ϕi[2]. So TM(Λ,ϕi) computes all of {xi} from Λ, then picks out index i. 
Now, that might not be the only approximately-minimal description (though it does imply that whatever approximately-minimal Λ,ϕ we do use is approximately a minimal description for all of x). ...

PyTorch Developer Podcast
Inductor - Post-grad FX passes

PyTorch Developer Podcast

Play Episode Listen Later Apr 12, 2024 24:07


The post-grad FX passes in Inductor run after AOTAutograd has functionalized and normalized the input program into separate forward/backward graphs. As such, they can generally assume that the graph in question is functionalized, except for some mutations to inputs at the end of the graph. At the end of the post-grad passes, special passes reintroduce mutation into the graph before it goes into the rest of Inductor lowering, which is generally aware of mutation. The post-grad FX passes are varied, but they are typically domain-specific passes making local changes to specific parts of the graph.
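The flavor of such a pass can be sketched in plain Python. This is a hypothetical miniature, not the actual torch.fx or Inductor API: nodes here are just (name, op, args) tuples over an already-functionalized graph, which is what lets the pass rewrite locally without worrying about aliasing.

```python
# Toy "post-grad style" graph pass (illustrative stand-in only; assumed
# node representation, not PyTorch's real FX data structures).

def eliminate_add_zero(graph):
    """Local, domain-specific rewrite: add(x, zero) -> x."""
    replacement = {}  # folded node name -> name it was folded into
    out = []
    for name, op, args in graph:
        # Chase earlier replacements so the pass composes with itself.
        args = tuple(replacement.get(a, a) for a in args)
        if op == "add" and args[1] == "zero":
            replacement[name] = args[0]  # fold this node away
        else:
            out.append((name, op, args))
    return out

graph = [
    ("t0", "mul", ("x", "y")),
    ("t1", "add", ("t0", "zero")),  # add of zero: removable
    ("t2", "relu", ("t1",)),
]
print(eliminate_add_zero(graph))
# -> [('t0', 'mul', ('x', 'y')), ('t2', 'relu', ('t0',))]
```

Because the graph is functional, dropping `t1` and rewiring `t2` is a purely local decision, which is the property the episode highlights.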

PyTorch Developer Podcast
PT2 extension points

PyTorch Developer Podcast

Play Episode Listen Later Feb 5, 2024 15:54


We discuss some extension points for customizing PT2 behavior across Dynamo, AOTAutograd and Inductor.

PyTorch Developer Podcast
Inductor - Define-by-run IR

PyTorch Developer Podcast

Play Episode Listen Later Jan 24, 2024 12:06


Define-by-run IR is how Inductor defines the internal compute of a pointwise/reduction operation. It is characterized by a function that calls a number of functions in the 'ops' namespace, where these ops can be overridden by different handlers depending on what kind of semantic analysis you need to do. The ops Inductor supports include regular arithmetic operators, but also memory load/store, indirect indexing, masking and collective operations like reductions.
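The mechanism described can be illustrated with a toy stand-in. The handler names here (EvalOps, CodegenOps) are assumptions for the sketch; Inductor's real ops handlers are far richer, but the core trick is the same: one function defines the compute, and swapping the handler changes what "running" it means.

```python
# Minimal sketch of a define-by-run IR: the computation is an ordinary
# function over an opaque `ops` object, and different handlers give
# different semantic analyses of the same definition.

class EvalOps:
    """Handler that executes the ops eagerly on concrete values."""
    def load(self, buf, idx):
        return buf[idx]
    def add(self, a, b):
        return a + b
    def mul(self, a, b):
        return a * b

class CodegenOps:
    """Handler that emits a string expression instead of computing."""
    def load(self, buf, idx):
        return f"{buf}[{idx}]"
    def add(self, a, b):
        return f"({a} + {b})"
    def mul(self, a, b):
        return f"({a} * {b})"

def pointwise_body(ops, x, i):
    # The same definition serves both handlers: x[i]*x[i] + x[i]
    v = ops.load(x, i)
    return ops.add(ops.mul(v, v), v)

print(pointwise_body(EvalOps(), [3, 4], 1))    # evaluates: 4*4 + 4 = 20
print(pointwise_body(CodegenOps(), "x", "i"))  # emits: ((x[i] * x[i]) + x[i])
```

A third handler could just as easily record the ops into a graph, which is how an FX graph can be extracted from a define-by-run definition.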

PyTorch Developer Podcast
Inductor - IR

PyTorch Developer Podcast

Play Episode Listen Later Jan 16, 2024 18:00


Inductor IR is an intermediate representation that lives between ATen FX graphs and the final Triton code generated by Inductor. It was designed to faithfully represent PyTorch semantics, and accordingly it models views, mutation and striding. When you write a lowering from ATen operators to Inductor IR, you get a TensorBox for each Tensor argument, which contains a reference to the underlying IR (via StorageBox, and then a Buffer/ComputedBuffer) that says how the Tensor was computed. The inner computation is represented via define-by-run, which allows for a compact definition of the IR while still letting you extract an FX graph if you desire. Scheduling then takes buffers of Inductor IR and decides what can be fused. Inductor IR may have too many nodes; this would be a good thing to refactor in the future.
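The layering described here can be mirrored with a few hypothetical dataclasses. These are simplified, assumed names following the episode's description, not the actual PyTorch source; the point is only the indirection: a TensorBox carries view metadata (size/stride) and points at a StorageBox, which points at the Buffer that says how the data was computed.

```python
# Hypothetical sketch of the TensorBox -> StorageBox -> Buffer layering
# (assumed, simplified classes; not the real torch._inductor definitions).
from dataclasses import dataclass

@dataclass
class Buffer:
    name: str
    compute: str  # e.g. a define-by-run body, summarized as text here

@dataclass
class StorageBox:
    data: Buffer  # indirection point: mutation can swap `data`

@dataclass
class TensorBox:
    storage: StorageBox
    size: tuple
    stride: tuple  # views share storage but change size/stride

buf = Buffer("buf0", "relu(load(arg0))")
base = TensorBox(StorageBox(buf), size=(4, 4), stride=(4, 1))
view = TensorBox(base.storage, size=(16,), stride=(1,))  # a view: same storage
print(view.storage is base.storage)  # True
```

The extra StorageBox hop is what lets mutation be modeled: replacing `storage.data` changes what every aliasing TensorBox sees, without touching the views themselves.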

MotorMouth Radio
Clothes dryers, cool tools & unsupported forward flexion

MotorMouth Radio

Play Episode Listen Later Sep 4, 2022 59:25


This week the boys stand behind the mantra that "they fix everything" when Ray recounts a story about getting his non-working clothes dryer back in service. This leads to talk about sound diagnostic practices and how working on cars prepared them to tackle almost any mechanical task. Chris marvels at the ease of setup and operation of the QuickJack Ray just bought, which leads to a caller who knows more about the body than we do talking about "unsupported forward flexion". We cap it off with another home repair story where the Inductor Tool was used.

MotorMouth Radio
Bondo use at home, confidence inspiring cars & odd hardware

MotorMouth Radio

Play Episode Listen Later Mar 13, 2022 57:48


Chris explains how and why he's been using automotive Bondo on his house repair projects, and what product to use for ceramic tile resurfacing. Ray asks if there's ever been a car in the stable that boosted driver confidence, or arrogance. Hardware takes over the show with clutch head and Reed Prince screws taking center stage, followed by the boys' favorite heating tool, the Inductor.

Adafruit Industries
Hello Inductor - Collin's Lab Notes

Adafruit Industries

Play Episode Listen Later Sep 9, 2021 0:59


Say hello to the lesser known passive component - the inductor #adafruit #collinslabnotes Shop components at Adafruit: https://www.adafruit.com/category/54 Visit the Adafruit shop online - http://www.adafruit.com ----------------------------------------- LIVE CHAT IS HERE! http://adafru.it/discord Adafruit on Instagram: https://www.instagram.com/adafruit Subscribe to Adafruit on YouTube: http://adafru.it/subscribe New tutorials on the Adafruit Learning System: http://learn.adafruit.com/ -----------------------------------------

EEVblog
EEVblog 1409 – The DANGERS of Inductor Back EMF

EEVblog

Play Episode Listen Later Jul 31, 2021 29:51


A practical demonstration of Lenz's law and back EMF in an inductive relay coil, and how to solve it using a freewheeling/flywheel/flyback/snubber/clamp diode. Also covered: the downsides of clamping diodes, and switch-arc suppression. Plus a look at an AMAZING potential phenomenon you probably haven't seen before! Actually, two rather cool things you probably haven't seen ...
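As a quick reference for the effect demonstrated here (standard circuit theory, not specific to the video): the voltage across an inductor is proportional to the rate of change of current, so abruptly interrupting a relay coil's current produces a large voltage spike, and a freewheeling diode gives the current a safe decay path instead.

```latex
% Inductor voltage-current relation:
v_L(t) = L\,\frac{di}{dt}
% Opening the switch forces di/dt large and negative, so |v_L| spikes,
% which is what arcs across the switch contacts.
%
% With a freewheeling diode across the coil, v_L is clamped to about one
% diode drop and the current decays through the coil resistance R:
i(t) = I_0\, e^{-tR/L}
```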

Fullyposeable
Episode 279 “Fullyposeable’s 2021 #Figlife Hall of Fame”

Fullyposeable

Play Episode Listen Later May 23, 2021 71:28


This week’s show is all about the Fullyposeable #Figlife Hall of Fame. In the little bit of news, Junk Shop Dog did show off their version of the Dynamite Kid, AEW showed off series 6, and Zombie Sailor introduces his next signee. Thank you to all the Inductors for their time, and thank you to CJ Mullins for making the awesome pictures. Make sure to follow us on Twitter, Youtube, Snapchat and Facebook @Fullyposeable. Instagram is @FullyposeableWFP. You can email us any questions at Fullyposeablewfp@gmail.com. Purchase our shirts and more at Whatamaneuver.net, Pro Wrestling Tee’s and RedBubble.

That's So Semi-Sweet
ENTP型エンジニアの人生設計 | How to Engineer Your Life with Kohei aka inductor

That's So Semi-Sweet

Play Episode Listen Later May 12, 2021 55:04


In episode 46 of TSSS, hina's SNS friend Kohei, aka inductor, joins as a guest! He is the show's first older Japanese male guest, so yui and hina are a little nervous. They ask about his eventful life: entering and then dropping out of a kosen (technical college), the path from NEET to working engineer, the world of the internet, and more. Don't miss his career advice for aspiring engineers! In today's episode yui and hina are joined by Kohei. Kohei is an architectural engineer, tech expert, and engineering mentor. They discuss what engineering really is, how to find the career that fits you, how to pivot in life, the importance of not knowing, and so much more. They also discuss the world of twitter, communication, identity and personality types. Hope you enjoy! Connect with inductor: @_inductor_ Talk to us on Instagram! tsss: @thatssosemisweet hina: @hinakadoya yui: @yui_nicole music by scottholmesmusic.com

Wrong Station
77 - The Inductor

Wrong Station

Play Episode Listen Later Mar 1, 2021 32:26


It began with a deep hum somewhere in the wall behind Samo, and then a tingle of static in the air. “Sail away, sail away, hah...” Doctor Martel said from behind the one-way glass, flicking a series of unseen switches to power up the machine. “Just like falling asleep.” Only it wasn’t like falling asleep; just falling. The Wrong Station contains explicit content and mature themes. Episode-specific warnings can be found at www.wrongstation.com

Cultura popular
Luis Antonio Sánchez Guzmán hablando de política, deportes, cultura y algo más.

Cultura popular

Play Episode Listen Later Feb 20, 2021 45:32


Luis Antonio Sánchez-Guzmán. Originally from Mexico City and based in Mérida, Yucatán since the age of 5, with Maya blood running in his veins, he considers himself Yucatecan by choice and at heart. His parents, a music teacher and a family physician, instilled in him an interest in serving society through many avenues: sport, culture, study, and more, experiences that developed his interest in the world around him. He holds a high-school diploma in Social Sciences and a law degree from the Universidad Autónoma de Yucatán (UADY), and a Master's in Business Administration with a specialty in Finance from the Universidad del Valle de México (UVM). He has a diploma in Results-Based Budgeting from the Secretaría de Hacienda y Crédito Público (SHCP) together with the Universidad Nacional Autónoma de México (UNAM), as well as training in the I-Corps program at the Nodos Binacionales de Innovación del Sureste (NoBI Sureste), where he took part in a venture around a high-purity chili-based antibiotic. He has a broad taste for free verse, photography, and Mexican history, as well as folk music, the plastic arts, and team sports. At the Colegio de Músicos de Yucatán, A.C. (COMY) and the Asociación de Hockey del Estado de Yucatán (AHY), he worked on administration, teaching, and training. With the latter he is noted for his participation with Yucatán's under-21 field hockey team, which took 3rd place at the 2001 Olimpiada Nacional Juvenil by beating the Jalisco state team. He did his social service at what was then the Procuraduría General de Justicia del Estado de Yucatán and his professional internship at the law firm "Rojano-Ávila-Ceballos Abogados" (R.A.C. Abogados).
He also worked as a registry data-entry clerk at the Registro Público de la Propiedad y el Comercio del Estado de Yucatán, and at the Secretaría de Desarrollo Urbano y Vivienda of Mexico City, where he held various posts. Since 2017 he has been an active member of the Grupo Liberal Juarista "Fraternidad y Justicia", which addresses educational, historical, political, and social topics; he served as Secretary of its Board in 2018 and is currently a presenting member. Finally, in 2019 he joined the Asociación de Hockey de la Universidad Nacional Autónoma de México (AHUNAM) as an assistant coach and as an inductor (onboarder) of new players, and is awaiting the reopening of UNAM's Yucatán campuses to work on forming hockey teams. He currently practices at his own firm, Servicio de Gestoría Administrativa y Jurídica (SGJA), together with a partner, handling a variety of matters and advising low-income clients on the legal problems they face. Little by little a destiny is forged, but it will depend on what he does and the perseverance he shows; only then will we know the next step in this story. --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/excelenteservicio/support

しがないラジオ
sp.89 [Guest: _inductor_] Why he deliberately moved from fun web-industry work to an enterprise infrastructure company

しがないラジオ

Play Episode Listen Later Nov 11, 2020 90:13


We welcome inductor as a guest and talk about MLOps, solutions architecture, Cloud Native, Kubernetes, and more. [Show Notes] ZOZOTOWN's new AI-powered "similar items" image search / Leaving ZOZO Technologies - inductor's blog / Hewlett Packard Enterprise / CNCF (Cloud Native Computing Foundation) Ambassadors | Cloud Native Computing Foundation / CLOUDNATIVE DAYS TOKYO 2020 / KARTE, a customer experience (CX) platform / Comparing WSL 2 and WSL 1 | Microsoft Docs / Among Us. Broadcast updates are posted on Twitter at @shiganaiRadio. Please send feedback using the hashtag #しがないラジオ! Impressions, topics you would like covered, and suggestions for improvement all help make the podcast better, so we look forward to your feedback. [Hosts] gami @jumpei_ikegami, zuckey @zuckey_17 [Guest] Nokogiri @_inductor_ [Equipment] Blue Micro Yeti USB 2.0 microphone

QSO Today - The oral histories of amateur radio
Episode 257 Ray Heffer G4NSJ

QSO Today - The oral histories of amateur radio

Play Episode Listen Later Jul 5, 2019 70:00


Ray Heffer, G4NSJ, commandeered the family shortwave radio as a kid to further his interest in shortwave listening and ultimately amateur radio. Ray made his career as a television, radio, and audio-visual equipment repairman, and now specializes in the repair and restoration of 1940s vintage AM radio sets. I found this endeavor fascinating, and Ray has created some clever and novel solutions to keep these radios entertaining their owners in the QSO Today.

LTTN Podcast
Sentinel Comics RPG Stolen Legacy Part 3

LTTN Podcast

Play Episode Listen Later Feb 22, 2019 45:56


Having captured Inductor in the park and learned the Vandals' plans, the heroes of Daybreak receive aid from the Sentinels of Freedom and head off to The Megalopolis Museum of Art. The Fan Favorite: A Masks RPG Zine http://kck.st/2SvSsZW You can get Stolen Legacy here: https://store.greaterthangames.com/sentinel-comics-the-roleplaying-game-stolen-legacy-one-shot-adventure-digital.html and check out the Sentinel Comics RPG Kickstarter here: http://kck.st/2WAHz7I “Hero Down”, “Monkeys Spinning Monkeys”, “Water Prelude”, “The Descent”, “Chase Pulse”, “Black Vortex”, "The Cannery" Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 3.0 creativecommons.org/licenses/by/3.0

しがないラジオ
sp.48 [Guest: __inductor_] A junior-high-grad(?) ex-NEET on the appeal of DevOps and fun, productive English learning through online games

しがないラジオ

Play Episode Listen Later Dec 31, 2018 107:52


We welcome inductor as a guest and talk about being a "logical junior-high graduate", SES contracts, DevOps, Docker, containers, learning English, Sawayaka, and more. [Show Notes] ZOZO Technologies, Inc. / National Institute of Technology, Numazu College / SES contract - Wikipedia / Wantedly / DevOps - Wikipedia / Docker - Wikipedia / Kubernetes - Wikipedia / ownCloud - Wikipedia / ConoHa / Infra study group / Japan Container Days / moby/moby / haconiwa/haconiwa / Kubernetes完全ガイド by Masaya Aoyama | Amazon / Docker/Kubernetes 実践コンテナ開発入門 by Akinori Yamada | Amazon / Back to the Future - Wikipedia / The Intern - Wikipedia / HiNative / Discord - Wikipedia / Charcoal Grill Restaurant Sawayaka - Wikipedia / Deploy.am / Job listings - ZOZO Technologies, Inc. Broadcast updates are posted on Twitter at @shiganaiRadio. Please send feedback using the hashtag #しがないラジオ! Impressions, topics you would like covered, and suggestions for improvement all help make the podcast better, so we look forward to your feedback. [Hosts] gami @jumpei_ikegami, zuckey @zuckey_17 [Guest] inductor @_inductor_ [Equipment] Blue Micro Yeti USB 2.0 microphone

AmateurLogic.TV
Ham College 33

AmateurLogic.TV

Play Episode Listen Later Sep 23, 2017


Foreign contacts and third party traffic. Series and parallel Inductors. Inductor demonstration. 01:06:39
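For reference, the series/parallel combination rules covered in this lesson are the standard ones (assuming no mutual coupling between the inductors):

```latex
% Inductors in series simply add:
L_{\text{series}} = L_1 + L_2 + \dots + L_n
% Inductors in parallel combine reciprocally, like resistors in parallel:
\frac{1}{L_{\text{parallel}}} = \frac{1}{L_1} + \frac{1}{L_2} + \dots + \frac{1}{L_n}
```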

QSO Today - The oral histories of amateur radio
Episode 143 Roy Lewallen W7EL

QSO Today - The oral histories of amateur radio

Play Episode Listen Later Apr 28, 2017 62:42


Roy Lewallen, W7EL, approaches ham radio from an engineering and even scientific point of view.  Roy is the creator of EZNEC, the popular antenna modeling software, and the author of many articles on radio and transceiver design, baluns, antennas, and inductors.  His contributions to the amateur radio hobby and advancing the state of art puts Roy at the head of the class in amateur radio.  W7EL is my QSO Today.

TDV FAQ's. Teoría del Delito. (Derecho Penal I)
TDV FAQ's 18. Autor mediato vs. inductor (Derecho Penal I-UMH-Fernando Miró)

TDV FAQ's. Teoría del Delito. (Derecho Penal I)

Play Episode Listen Later Jul 1, 2015 100:00


TDV FAQ's: When is someone an indirect perpetrator (autor mediato), and when merely an inducer (inductor)? Theory of Crime (Teoría del Delito). Law degree, blended-learning group. Professor: Fernando Miró Llinares. Department of Legal Science, Criminal Law Area. PLE Project. Universidad Miguel Hernández de Elche.

Second order modelling - Modelling, analysis and control
2nd order modelling 2 - Resistor-inductor-capacitor

Second order modelling - Modelling, analysis and control

Play Episode Listen Later Feb 3, 2014 9:25


Extends the 1st order modelling videos to show the derivation of a model for a simple 2nd order electrical circuit comprising resistor, inductor and capacitor. Reiterates messages on analogies with mechanical systems.
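The model derived in the video follows from Kirchhoff's voltage law around the series loop. In standard notation (q: capacitor charge, i = dq/dt: loop current, v_in: source voltage), with the mass-spring-damper analogy the video reiterates:

```latex
% KVL around the series RLC loop, in terms of capacitor charge q:
L\,\frac{d^2 q}{dt^2} + R\,\frac{dq}{dt} + \frac{q}{C} = v_{\text{in}}(t)
% Mechanical analogue: mass-spring-damper (m ~ L, c ~ R, k ~ 1/C):
m\,\frac{d^2 x}{dt^2} + c\,\frac{dx}{dt} + k\,x = F(t)
% Standard second-order parameters for the electrical case:
\omega_n = \frac{1}{\sqrt{LC}}, \qquad \zeta = \frac{R}{2}\sqrt{\frac{C}{L}}
```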

First Order Modelling - Modelling, analysis and control
1st order modelling 4 - resistor-inductor

First Order Modelling - Modelling, analysis and control

Play Episode Listen Later Nov 5, 2013 9:46


Derives models for series resistor-inductor circuits and discusses analogies with mass-damper systems and summarises broader analogies between electrical and mechanical systems. Brief consideration of parallel resistor-inductor circuit.
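The first-order model in the video comes from summing voltages around the series RL loop. In standard notation (i: loop current, v_in: source voltage), alongside the mass-damper analogue the video discusses:

```latex
% KVL around the series RL loop, with time constant tau:
L\,\frac{di}{dt} + R\,i = v_{\text{in}}(t), \qquad \tau = \frac{L}{R}
% Mechanical analogue: mass-damper driven by a force F (v: velocity):
m\,\frac{dv}{dt} + c\,v = F(t), \qquad \tau = \frac{m}{c}
```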