Podcasts about VLSI

Process of creating an integrated circuit by combining thousands of transistors into a single chip

  • 52 PODCASTS
  • 86 EPISODES
  • 43m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Jun 14, 2025 LATEST

POPULARITY (2017-2024)


Best podcasts about VLSI

Latest podcast episodes about VLSI

華視三國演議
Let's build the sacred mountain that protects the nation! | #曹興誠 #矢板明夫 #汪浩 | @華視三國演議 | 20250614

華視三國演議

Play Episode Listen Later Jun 14, 2025 52:40


Global dependence on Taiwan's chips keeps deepening. What is the strategic significance of the "silicon shield"? Is it a safeguard or a risk? Amid the US-China tech war and geopolitical tensions, how should Taiwan's companies and government respond to this great-power contest? UMC founder Robert Tsao (曹興誠), interviewed in the documentary 造山者, joined ITRI's electronics research institute early in his career to take part in the RCA technology-transfer program, regarded as the starting point of Taiwan's semiconductor industry. From ITRI to founding UMC, Taiwan's first privately owned semiconductor company, how did he overcome the early challenges of technology, markets, and capital? Was there a key moment that changed UMC's fate? In the 1990s Tsao led UMC's transformation into a pure-play foundry; why did that decision succeed, and did it meet internal resistance at the time? What is the essential difference between UMC and TSMC? Looking back at ITRI's technology commercialization and UMC's success, does that model still apply to other innovative industries today? What role did government policy play in bringing Taiwan's semiconductor industry to where it is now? As both an entrepreneur and a citizen, what expectations and advice does Tsao have for the next generation of Taiwanese youth and founders? For the full interview, tune in to @華視三國演議! Guests: #曹興誠 #矢板明夫 Host: #汪浩 The views expressed do not represent the station's position. #造山者 #聯電 #半導體 #AI

PC Perspective Podcast
Podcast #825 - AMD Radeon RX 9060 XT 16GB Review, Microsoft Wants to Fix USB-C, Nvidia Pricing, Cyberpunk 2 + MORE!

PC Perspective Podcast

Play Episode Listen Later Jun 6, 2025 92:57


The wait is over: we have the only JoshTEKK review of the new AMD Radeon RX 9060 XT on YouTube. You're welcome. We also have a serious discussion of Molex, Nvidia GPU availability rumors, Fosi Audio, and of course zero-day Chrome exploits.

Timestamps:
00:00 Intro
00:49 Patreon
03:05 Food with Josh
04:44 AMD Radeon RX 9060 XT - the JoshTEKK review
20:35 PCB and die shots from TechPowerUp
21:45 OC with undervolting hits 3.5 GHz at under 200W
23:48 That 1440/ultra review
27:43 Unverified report of NVIDIA cutting RTX 50 series production
30:49 Microsoft is going to fix USB-C
36:11 FOSI Audio has a gaming DAC/AMP
39:38 VLSI exists only as a patent troll, may not get Intel billions anymore
42:39 Molex has the solution to your PCI-E 7.0 cabling needs
46:09 Podcast sponsor NordLayer
47:51 (in)Security Corner
1:05:24 Gaming Quick Hits
1:14:29 Picks of the Week
1:24:36 Interlude - Sebastian is afk and the other panelists offer some deep thoughts
1:25:29 Picks of the Week continues
1:31:10 Outro

★ Support this podcast on Patreon ★

Hacker Public Radio
HPR4333: A Radically Transparent Computer Without Complex VLSI

Hacker Public Radio

Play Episode Listen Later Mar 12, 2025


TITLE: A Radically Transparent Computer Without Complex VLSI

VENUE: 1st IEEE Conference on Secure and Trustworthy CyberInfrastructure for IoT and Microelectronics (SaTC 2025), Wright State University, February 25-27, 2025. This is a recording of the final rehearsal, held three hours before this invited talk. No slides were used.

ABSTRACT: Foreign adversaries have colonized America's computers since at least 1986. Four decades later, online safety is the largest failure in the history of human engineering. Radical stewardship in cybersecurity would bring radical progress, but responsibility for losses will need to flow from the bottom up. The buck stops with victims, who must accept all blame for cyberattacks. Only then will people at risk properly vet the products and vendors they select. A leading challenge in stewardship is balancing the opaque, proprietary nature of VLSI complex logic with the owner's need for complete control. Since these aspects are incompatible and owner control is essential, it is necessary to design computers that avoid complex VLSI entirely. One such architecture, Dauug | 36, is being developed at Wright State University to deliver 36-bit computing, preemptive multitasking, paged virtual memory, and hundreds of opcodes, all without using a single microprocessor or anything like one.

BIOGRAPHY: Marc Abel is an engineer-scientist specializing in technology that supports civil rights, economic security, and geopolitical stability. He holds a 1991 B.S. in Engineering and Applied Science (focused on computer science) from Caltech and a 2022 Ph.D. in Computer Science and Engineering from Wright State University. Marc is the sole inventor, architect, implementer, maintainer, documenter, and promoter of the Dauug | 36 open-source minicomputer for critical infrastructure. He is the original and still the only author of Dauug | 36's firmware; the designer and implementer of its assembly language and assemblers; the writer of several related software tools, notably open-source electronic design automation and simulation tools; and the sole author of Osmin, a real-time operating system (RTOS) kernel for the architecture. He has written 200,000 words of system documentation, including his dissertation and its online continuation, The Dauug House.

CERIAS Security Seminar Podcast
Ali Al-Haj, Zero Trust Architectures and Digital Trust Frameworks: A Complementary or Contradictory Relationship?

CERIAS Security Seminar Podcast

Play Episode Listen Later Feb 26, 2025 52:06


This session explores the foundational concepts and practical applications of Zero Trust Architectures (ZTA) and Digital Trust Frameworks (DTF), two paradigms gaining traction in cybersecurity. While Zero Trust challenges the traditional notion of trust by enforcing strict access controls and authentication measures, Digital Trust seeks to build confidence through data integrity, privacy, and ethical considerations. Through this talk, we will investigate whether these approaches intersect, complement, or diverge, and what this means for the future of cybersecurity. Attendees will gain insights into implementing these frameworks to enhance both security and user confidence in digital environments. In addition to a practical overview, this talk will highlight emerging research areas in both domains.

About the speaker: Dr. Ali Al-Haj received his undergraduate degree in Electrical Engineering from Yarmouk University, Jordan, in 1985, followed by an M.Sc. degree in Electronics Engineering from Tottori University, Japan, in 1988 and a Ph.D. degree in Computer Engineering from Osaka University, Japan, in 1993. He then worked as a research associate at ATR Advanced Telecommunications Research Laboratories in Kyoto, Japan, until 1995. Prof. Al-Haj joined Princess Sumaya University for Technology, Jordan, in October 1995, where he currently serves as a Full Professor. He has published papers in dataflow computing, information retrieval, VLSI digital signal processing, neural networks, information security, and digital multimedia watermarking.

Hard Reset
BE5 - DVD 2025 (Live)

Hard Reset

Play Episode Listen Later Feb 25, 2025 61:54


The beauty of a podcast (for us) is that it happens behind the scenes. There's no time pressure, you can redo things, and the magic of editing has become like a third hand for us. So we were quite surprised when Prof. Adi Teman invited us... to give a lecture. We found ourselves speaking in front of dozens of students and faculty. And as if that weren't enough, we challenged ourselves and decided to record an episode live, on stage, in front of the audience. In this bonus episode we talked with students who took the #DVD course this past semester at the Faculty of Engineering at Bar-Ilan University, and we were happy to meet interesting, tirelessly curious people. The list of thanks and tags here is long, so hold on tight, and then go listen to the episode.

Partial list of interviewees, in order of appearance:
Itay Merlin, M.Sc. in Electrical Engineering, specializing in hardware architecture, and a TA in the DVD course. Research topic: error-resilient hardware accelerators based on advanced memories for genomic applications and radiation-rich environments, advised by Prof. Alex Fish and Dr. Leonid Yavits.
Daniel Barnett, B.Sc. in Electrical Engineering, specializing in signal processing and nanoelectronics. Research topic: Measurement and design of gain cell dynamic memories, advised by Prof. Adi Teman.
Or Shochat, B.Sc. in Electrical Engineering, specializing in VLSI and nanoelectronics. Research topic: Memory circuit design for Quantum chip applications, advised by Prof. Adi Teman.
Yuval Harari, B.Sc. in Computer Engineering, computer hardware track. Research topic: Associative memories, in-memory computing and hardware accelerated bioinformatics, advised by Dr. Leonid Yavits.
Mendy Greenwald, B.Sc. in Electrical Engineering, specializing in VLSI and electro-optics. Research topic: Ultra-Low Power Voltage-Level Sensor for Battery Level Monitoring and Energy Harvesting, advised by Prof. Yosi Shor.
Yaron Elkalai, B.Sc. in Electrical Engineering, specializing in nanoelectronics and optics. Research topic: CAM (Content Addressable Memory) based Memristors, advised by Prof. Adi Teman.

Come listen to the episode and join our listeners' group, where we turn you into tirelessly curious people >>> https://chat.whatsapp.com/KwUu8pQsxx220qS7AXv04T We'd love to hear what you think of the episode in the comments. Bonus episode - DVD 2025 (Live). Hard Reset - the podcast of the Hardware Engineering Israel community. You can reach us by email at podcasthardreset@gmail.com. This episode was recorded during the "Swords of Iron" war. Enjoy listening.

Padepokan Budi Rahardjo
Taiwan protects itself with chips (and drones)

Padepokan Budi Rahardjo

Play Episode Listen Later Oct 31, 2024 8:34


This is still a story about semiconductors, VLSI, and IC design. I took this topic from an article in the October 2024 edition of IEEE Spectrum.

Hard Reset
E58 - Power Electronics (Mor Peretz)

Hard Reset

Play Episode Listen Later Sep 23, 2024 86:10


The original date for recording this episode was October 8, 2023, at 9 p.m. After almost a year that was crazy for everyone, we managed to align all the stars and record it. We heard about Professor Mor Peretz almost by accident, and luckily we did. Otherwise, how would we know what power electronics is? Mor is a professor at the School of Electrical and Computer Engineering at Ben-Gurion University, and he is also the CEO of CaPow. We met at the Next Silicon offices, chatted a bit about fluids and mutual acquaintances, and then moved on to the main course. So what did we talk about? What is "power electronics"? What physical conversions does power electronics involve? What does his company do? What's the secret to combining a professorship with being a CEO? The surprising connection that led to this episode? Mor's new project: the university's center for power electronics and integrated circuits. The VLSI track at Ben-Gurion University. Many thanks to Or Kirshenboim for his contribution to getting this episode out. We invite you to join our listeners on WhatsApp, where we talk about electronics, about power, and sometimes about both (there are funny memes too) >>> https://chat.whatsapp.com/KwUu8pQsxx220qS7AXv04T We'd love to hear your thoughts on the episode in the comments. Enjoy listening. #Power #Electronics #Hardware #VLSI #CaPow

Hard Reset
E48 - Tech Lead (Elihai Maicas)

Hard Reset

Play Episode Listen Later May 6, 2024 67:20


This episode will not talk about management. One of the choices engineers face at some point in their career is whether to go in a managerial direction or to remain an IC, in the sense of "Individual Contributor" rather than "integrated circuit". An engineer who chooses to remain an IC can grow into a technical lead, influencing the company even without formally managing people. It's important to note that this choice isn't permanent; you can move between the paths, and there are technical leads who also manage engineers, but as engineers ourselves we want to isolate the problem in order to understand it. At this point, let me introduce our interviewee for this episode: Elihai Maicas. Elihai is a Principal Engineer at Nvidia with over 18 years of experience. He also has a background in management, and today he serves as a technical lead on the team responsible for the core of one of Nvidia's products. This episode is the counterpart to a future episode about a management career, and it's a meta-episode on a topic that isn't technological per se but is part of our technological lives in the industry. Anyone who likes homework (like Lior) and hasn't listened to the following episodes yet should start with them for better background for the conversation with Elihai: Verification Engineer (Episode 8), Chip Design Architect (Episode 26). It's worth joining our listeners' group; instead of engineering levels, we rank podcasters there >>> https://chat.whatsapp.com/KwUu8pQsxx220qS7AXv04T We'd love to hear your thoughts on the episode in the comments. Episode 48 - Tech Lead. Hard Reset - the podcast of the Hardware Engineering Israel community. This episode was recorded during the "Swords of Iron" war. Enjoy listening.

Oral Arguments for the Court of Appeals for the Federal Circuit
VLSI Technology LLC v. Intel Corporation

Oral Arguments for the Court of Appeals for the Federal Circuit

Play Episode Listen Later Apr 3, 2024 25:48


VLSI Technology LLC v. Intel Corporation

Proactive - Interviews for investors
atai Life Sciences advances VLS-01 for treatment-resistant depression

Proactive - Interviews for investors

Play Episode Listen Later Mar 11, 2024 5:02


atai Life Sciences chief scientific officer Dr Srinivas Rao discusses the company's development candidate VLS-01, aimed at treating treatment-resistant depression, with Proactive's Stephen Gunnion. Rao explained that VLS-01 is a formulation of DMT, the active component in ayahuasca, known for its short-duration psychedelic effects. The innovation lies in its delivery method: an oral transmucosal film, resembling a Listerine strip, which dissolves inside the mouth over 20 minutes, providing a patient-friendly alternative to intravenous administration. Rao said this approach is designed to enhance the patient and healthcare provider experience, especially in settings not equipped for IV treatments. The ongoing Phase 1B trial seeks to confirm VLS-01's safety, tolerability, pharmacokinetics (PK), and pharmacodynamics, aiming for a psychoactive effect lasting 30-45 minutes with patients returning to baseline within two hours post-administration. Rao said this trial format follows the US treatment paradigm established by Spravato (esketamine) for depression. The trial will involve 16 healthy volunteers to assess the drug's impact, setting a baseline with intravenous DMT before testing up to three oral doses of VLS-01. Initial results are expected in the second half of the year, with plans to proceed to Phase 2 trials in patients with treatment-resistant depression to further evaluate the compound's efficacy. #AtaiLifeSciences #VLS01 #TreatmentResistantDepression #DMT #PsychedelicMedicine #MentalHealth #Pharmaceuticals #ClinicalTrials

The Core Report
#230 Markets Search For Signals And Direction

The Core Report

Play Episode Listen Later Feb 27, 2024 29:23


On today's episode, financial journalist Govindraj Ethiraj talks to Sunil Nanda, a recognised industry expert in computer architecture and VLSI design (who also started up the India Design Centre for Nvidia in Bangalore in 2004), as well as Govind Iyer, board member at Infosys and chairman of Social Ventures Partners India.

SHOW NOTES
(00:00) Stories Of The Day
(00:50) Markets search for signals and direction. What will they be?
(02:24) Oil prices are down again
(03:05) Two companies, both over 75 years old, prepare for a fresh battle in paints
(06:27) India's chip plans and why Nvidia chips are special
(21:30) How Indian businesses have tens of thousands of crores of unspent money for social projects

For more of our coverage check out thecore.in

Patents: Post-Grant Podcast
Behaving Badly: OpenSky v. VLSI and Sanctions at the PTAB

Patents: Post-Grant Podcast

Play Episode Listen Later Feb 15, 2024 22:24


Please join our Intellectual Property and Health Sciences practice groups for our podcast series focused on strategies, trends, and other happenings in post-grant proceedings. In this episode, Troutman Pepper Partner Andy Zappia and Counsel Bryan Smith analyze the sanctions order made public on February 6 in the OpenSky v. VLSI IPR proceeding. They explore how sanctions work at the PTAB, the types of conduct that could expose a party to sanctions, and best practices to avoid them.

Hard Reset
E41 - Integrated Circuits (Prof. Adi Teman)

Hard Reset

Play Episode Listen Later Jan 29, 2024 74:11


We debated what to call this episode. On one hand, we wanted to call it "Fabrication" because we finally wanted to talk about how chips are manufactured in fabs. But after a preliminary talk with our interviewee, we realized we'd need more than one episode to cover the material, so we decided to start with a more fitting concept: "Integrated Circuits". We set out on a journey with big questions: when we say chips, what do we mean? Where does the term "chip" come from, and how does it relate to an integrated circuit? These questions are so big that, to answer them, we sent Lior to study the devices track at the university. Just kidding, he chose to study it himself. But he did study for the exam using the online course by Prof. Adi Teman. Coincidentally, or not, he's also our guest in this episode! Say hello to Prof. Adi Teman from Bar-Ilan University. Adi came to talk with us about chip fabrication and opened doors for us (and soon for you) into the fascinating world of integrated circuits, their fabrication process, lithography, and even interesting initiatives at Bar-Ilan University. So what did we talk about? Who invented the integrated circuit? What are the differences between an integrated circuit (IC), large-scale integration (LSI), and very large-scale integration (VLSI), and what lies beyond? Everyone knows (we hope) the job titles Frontend and Backend, but where do they come from? How many steps are there in a chip's fabrication process? How much does it cost to manufacture a single chip? What is Yield? (again) We keep exploring the fascinating world of hardware and chips, trying to broaden our horizons and yours, discovering along the way how much we still don't know. Worth following and looking forward to more episodes we're planning. In our WhatsApp group there's an integrated square >>> https://chat.whatsapp.com/KwUu8pQsxx220qS7AXv04T

Hard Reset
E39 - Biggest Chip Size (George Elias)

Hard Reset

Play Episode Listen Later Jan 1, 2024 45:29


As a kid I would sometimes go to the library during recess. On the table at the entrance there was always a copy of the "Guinness Book of Records 2002". The natural curiosity of a child and future scientist burned in me with every page and picture: the farthest, the tallest, the smallest, the biggest... One day I came across a strange record: the Guinness record for the world's largest chip, held by a chip weighing eleven kilograms and three meters long. That chip was made of potatoes (448 kg of potatoes, to be exact). We tried to be kids again and find out how big the biggest chip could be. To answer the question we invited our interviewee, George Elias, to Nvidia's podcast studio. George leads a pioneering, multidisciplinary team at Nvidia, a team that works across projects and solves hard problems. And we had an easy question with a complicated answer: what is the biggest chip size? From there the conversation flowed. So what did we talk about? What is the biggest chip size in the industry, and why? How can you manufacture a bigger chip? What are the challenges in building such a big chip? What does innovation look like when our chips need to keep getting bigger? What limits a chip's size today? An episode born from curiosity and an innocent question grew into broad, deep industry knowledge and led to many new directions for future episodes. In our WhatsApp group there are (pictures of) chips of all sizes >>> https://chat.whatsapp.com/KwUu8pQsxx220qS7AXv04T

Oral Arguments for the Court of Appeals for the Federal Circuit
VLSI Technology LLC v. Intel Corporation

Oral Arguments for the Court of Appeals for the Federal Circuit

Play Episode Listen Later Oct 5, 2023 47:42


VLSI Technology LLC v. Intel Corporation

Hard Reset
E38 - Emulation Engineer (Saeed Saabni)

Hard Reset

Play Episode Listen Later Sep 26, 2023 49:07


Working on chips requires comprehensive, thorough testing of the chip's code (the RTL). The logic is tested in simulation. So what is emulation? To answer that question and get to know the role of the emulation engineer, we invited Saeed Saabni, who has founded three emulation teams and is an expert in the field, to the studio. So what did we talk about? What's the difference between simulation and emulation, and what is emulation anyway? The types of emulation that exist today. Do you always need emulation? A bonus for Shai: is there DFT in emulation too? This episode belongs to our series on the roles of electrical engineers in the industry, but for new listeners we suggest first listening to Episode 7 (Logic Design / Frontend Engineer) with Amnon Zeidman to get familiar with the concepts discussed here. New versions of a Gameboy Emulator can be found (among other things) in our WhatsApp group >>> https://chat.whatsapp.com/KwUu8pQsxx220qS7AXv04T

PING
The Chips are down: Moore's Law coming to an end.

PING

Play Episode Listen Later Aug 30, 2023 57:11


In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses the future of VLSI as Moore's Law comes to an end. The conversation was motivated by a key presentation at the most recent ANRW session at IETF 117 in San Francisco. For over five decades we have been able to rely on Moore's Law: an annual, latterly biennial, doubling of speed and halving of the size of the technology inside a microchip. Very Large Scale Integration (VLSI) is the basic building block of the modern age, and its fundamental unit is the transistor. From its beginnings off the back of the diode, replacing valves but still built from discrete components, to the modern reality of trillions of logic "gates" on a single chip, everything we have built in recent times that includes a computer has been built under the model "it can only get cheaper next time round". But for various reasons explored in this episode, that isn't true any more, and won't be true in the future. We're going to have to get used to the idea that it isn't always faster, smaller, cheaper, and this will have an impact on how we design networks, including details inside the protocol stack that bear on the processing complexity of forwarding packets along the path. A few times, both Geoff and I get our prefixes mixed up and may say millimeters for nanometers, or even worse, on air. We also confused the order of letters in the company acronym TSMC, the Taiwan Semiconductor Manufacturing Company. Read more about the end of Moore's Law on the APNIC Blog and at the IETF: Chipping Away at Moore's Law (August 2023, Geoff Huston); It's the End of DRAM As We Know It (July 2023, Philip Levis, IETF 117 ANRW session)
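The doubling the episode describes is simple compound growth, and a quick sketch makes five decades of it concrete. This is a toy projection, not process history: the function name, the clean two-year doubling period, and the Intel 4004 baseline (roughly 2,300 transistors in 1971) are illustrative assumptions, not figures from the episode.

```python
# Toy illustration of Moore's Law as described in the episode:
# transistor counts double roughly every two years.
# Baseline: the Intel 4004 (1971), about 2,300 transistors.

def projected_transistors(year: int, base_year: int = 1971,
                          base_count: int = 2300,
                          doubling_period: float = 2.0) -> float:
    """Project a transistor count assuming one idealized doubling
    every `doubling_period` years since `base_year`."""
    doublings = (year - base_year) / doubling_period
    return base_count * 2 ** doublings

# Fifty years of idealized doubling carries ~2,300 transistors
# into the tens of billions, the scale of today's largest dies.
for year in (1971, 1991, 2011, 2021):
    print(year, f"{projected_transistors(year):,.0f}")
```

The point the episode makes is visible at the tail of this curve: sustaining it requires each process generation to keep arriving on schedule and getting cheaper, which is exactly the assumption that no longer holds.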

Hard Reset
Episode 31 - Test Engineer (Nadav Fitussi)

Hard Reset

Play Episode Listen Later Jun 5, 2023 71:52


Have you heard about our listeners' group? Come, there are stickers >>> https://chat.whatsapp.com/KwUu8pQsxx220qS7AXv04T Tell us what you think in the comments. Episode 31 - Test Engineer. Hard Reset - the podcast of the Hardware Engineering Israel community.

Science (Video)
Demystifying VLSI Technology: Exploring Its Future Possibilities

Science (Video)

Play Episode Listen Later May 30, 2023 71:43


Very large-scale integration (VLSI) technology is the magic that lets us cram a huge number of electronic components onto a tiny microchip, enabling the smaller, more powerful electronic devices we use in our daily lives. VLSI is a continually evolving field, and researchers and engineers worldwide continue to make new advancements and innovations. Carver Mead, the 2022 Kyoto Prize Laureate in Advanced Technology, is widely regarded as one of the pioneers of modern microelectronics, having made significant contributions to VLSI technology and semiconductor devices. Mead is joined by John Smee and Sanjay Jha for a roundtable discussion hosted by UC San Diego professor Andrew Kahng to demystify the technology and explore future possibilities for VLSI. Series: "Computer Science Channel" [Science] [Show ID: 38823]

University of California Audio Podcasts (Audio)
Demystifying VLSI Technology: Exploring Its Future Possibilities

University of California Audio Podcasts (Audio)

Play Episode Listen Later May 30, 2023 71:43


Very large-scale integration (VLSI) technology is the magic that lets us cram a huge number of electronic components onto a tiny microchip, enabling the smaller, more powerful electronic devices we use in our daily lives. VLSI is a continually evolving field, and researchers and engineers worldwide continue to make new advancements and innovations. Carver Mead, the 2022 Kyoto Prize Laureate in Advanced Technology, is widely regarded as one of the pioneers of modern microelectronics, having made significant contributions to VLSI technology and semiconductor devices. Mead is joined by John Smee and Sanjay Jha for a roundtable discussion hosted by UC San Diego professor Andrew Kahng to demystify the technology and explore future possibilities for VLSI. Series: "Computer Science Channel" [Science] [Show ID: 38823]

Science (Audio)
Demystifying VLSI Technology: Exploring Its Future Possibilities

Science (Audio)

Play Episode Listen Later May 30, 2023 71:43


Very large-scale integration (VLSI) technology is the magic that lets us cram a huge number of electronic components onto a tiny microchip, enabling the smaller, more powerful electronic devices we use in our daily lives. VLSI is a continually evolving field, and researchers and engineers worldwide continue to make new advancements and innovations. Carver Mead, the 2022 Kyoto Prize Laureate in Advanced Technology, is widely regarded as one of the pioneers of modern microelectronics, having made significant contributions to VLSI technology and semiconductor devices. Mead is joined by John Smee and Sanjay Jha for a roundtable discussion hosted by UC San Diego professor Andrew Kahng to demystify the technology and explore future possibilities for VLSI. Series: "Computer Science Channel" [Science] [Show ID: 38823]

UC San Diego (Audio)
Demystifying VLSI Technology: Exploring Its Future Possibilities

UC San Diego (Audio)

Play Episode Listen Later May 30, 2023 71:43


Very large-scale integration (VLSI) technology is the magic that lets us cram a huge number of electronic components onto a tiny microchip, enabling the smaller, more powerful electronic devices we use in our daily lives. VLSI is a continually evolving field, and researchers and engineers worldwide continue to make new advancements and innovations. Carver Mead, the 2022 Kyoto Prize Laureate in Advanced Technology, is widely regarded as one of the pioneers of modern microelectronics, having made significant contributions to VLSI technology and semiconductor devices. Mead is joined by John Smee and Sanjay Jha for a roundtable discussion hosted by UC San Diego professor Andrew Kahng to demystify the technology and explore future possibilities for VLSI. Series: "Computer Science Channel" [Science] [Show ID: 38823]

Data Today with Dan Klein
Data Transformation with Heather Savory

Data Today with Dan Klein

Play Episode Listen Later May 24, 2023 25:28


When moving and transforming data, how do you make sure it retains its accuracy? That's the question that drives today's guest, Heather Savory. Heather has 30 years of experience in both the public and private sectors. She currently holds Non-Executive Director roles in the UK Parliament and Ministry of Justice and was previously the Director General for Data Capability at the Office for National Statistics (ONS). We discuss connecting siloed data sources, how to communicate the importance of data to the public, and the benefits of sharing data.

00:00 - Intro
01:52 - How did Heather experience data and information handling in Parliament?
07:13 - Heather's journey from VLSI chips to government
10:59 - There's no digital without the data - what does this mean to Heather?
23:57 - Dan's final thoughts

LINKS:
Heather Savory: https://www.linkedin.com/in/heather-savory-63055b/?originalSubdomain=uk
Dan Klein: https://uk.linkedin.com/in/dplklein
Zühlke: https://www.zuehlke.com/en

Welcome to Data Today, a podcast from Zühlke. We're living in a world of opportunities, but to fully realise them we have to reshape the way we innovate. We need to stop siloing data, ring-fencing knowledge, and looking at traditional value chains. That's what this podcast is about. Every two weeks, we take a look at data outside the box to see how amazing individuals from disparate fields and industries are transforming the way they work with data, the challenges they are overcoming, and what we can all learn from them. Zühlke is a global innovation service provider. We envisage ideas and create new business models for our clients by developing services and products based on new technologies, from the initial vision through development to deployment, production, and operation.

Hard Reset
Episode 30 - Chip Power (Yuval Bustan)

Hard Reset

Play Episode Listen Later May 22, 2023 82:41


Yuval Bustan is a power architect at Intel with more than 25 years of experience. Yuval brings us the story of power ("hespek"), and truth be told... we accomplished ("hispaknu") a lot. (Ba-dum-tss.) There are plenty more jokes like that in the episode, worth a listen! What did we talk about: What is power, and what types of power exist? When does a chip's power requirement enter the development process? How is power measured? How do frequency, chip size, and voltage affect it? How is power controlled, and how do we minimize heat dissipation? What are the tradeoffs of raising and lowering power? What is Overclocking? Have you heard about our listeners' group? Come, there are stickers >>> https://chat.whatsapp.com/KwUu8pQsxx220qS7AXv04T

Physics (Video)
Carver Mead - 2022 Kyoto Prize Laureate in Advanced Technology: Engineering Concepts Clarify Physical Law

Physics (Video)

Play Episode Listen Later Apr 16, 2023 71:20


Carver Mead is a pioneer of modern microelectronics. He proposed a new methodology, very large-scale integration (VLSI), that made it possible to create millions or billions of transistors on a single integrated circuit (microchip). His research investigated techniques for VLSI, designing and creating high-complexity microchips. This design process has advanced electronic technologies and transformed the lives of most of the people inhabiting our planet. Mead also paved the way for VLSI design automation, facilitating the revolutionary development of today's VLSI-based electronics and industry. For his work and contributions, Mead was awarded the 2022 Kyoto Prize in Advanced Technology. In his talk, "Engineering Concepts Clarify Physical Law," Mead discusses a simplified theory that might serve as an entry point for further development by generations of young people who feel disenfranchised by the existing establishment. Series: "Computer Science Channel" [Science] [Show ID: 38572]

Science (Video)
Carver Mead - 2022 Kyoto Prize Laureate in Advanced Technology: Engineering Concepts Clarify Physical Law

Science (Video)

Play Episode Listen Later Apr 16, 2023 71:20


Carver Mead is a pioneer of modern microelectronics. He proposed a new methodology, very large-scale integration (VLSI), that made it possible to create millions or billions of transistors on a single integrated circuit (microchip). His research investigated techniques for VLSI, designing and creating high-complexity microchips. This design process has advanced electronic technologies and transformed the lives of most of the people inhabiting our planet. Mead also paved the way for VLSI design automation, facilitating the revolutionary development of today's VLSI-based electronics and industry. For his work and contributions, Mead was awarded the 2022 Kyoto Prize in Advanced Technology. In his talk, "Engineering Concepts Clarify Physical Law," Mead discusses a simplified theory that might serve as an entry point for further development by generations of young people who feel disenfranchised by the existing establishment. Series: "Computer Science Channel" [Science] [Show ID: 38572]

University of California Audio Podcasts (Audio)
Carver Mead - 2022 Kyoto Prize Laureate in Advanced Technology: Engineering Concepts Clarify Physical Law

University of California Audio Podcasts (Audio)

Play Episode Listen Later Apr 16, 2023 71:20


Carver Mead is a pioneer of modern microelectronics. He proposed a new methodology, very large-scale integration (VLSI), that made it possible to create millions or billions of transistors on a single integrated circuit (microchip). His research investigated techniques for VLSI, designing and creating high-complexity microchips. This design process has advanced electronic technologies and transformed the lives of most of the people inhabiting our planet. Mead also paved the way for VLSI design automation, facilitating the revolutionary development of today's VLSI-based electronics and industry. For his work and contributions, Mead was awarded the 2022 Kyoto Prize in Advanced Technology. In his talk, "Engineering Concepts Clarify Physical Law," Mead discusses a simplified theory that might serve as an entry point for further development by generations of young people who feel disenfranchised by the existing establishment. Series: "Computer Science Channel" [Science] [Show ID: 38572]


Hard Reset
Episode 27 - CAD Engineer (Rita Kogan)

Hard Reset

Play Episode Listen Later Apr 10, 2023 47:50


Usually, if you ask VLSI engineers working at startups (mainly on the backend side), they will tell you that they write their own scripts and automation. Our next interviewee would not agree with that. She would even take offense! This time we interviewed Rita Kogan (no, no family relation to our Yuval) about the interesting and challenging (and, to our surprise, not so well-known) role of the CAD engineer. In this episode you will hear about: * the CAD engineer's role and responsibilities * when a CAD engineer is needed in product development * the difference between CAD and IT, DevOps and EDA * what a CI/CD process looks like at a hardware company. The episode covered many topics, so we recommend refreshing the concepts of Backend, Frontend and Analog before listening (episodes 7, 8, 11, 12)! One thing is certain: being a CAD engineer is anything but easy. We also have a listeners' WhatsApp group. Come join, there are stickers >>> https://chat.whatsapp.com/KwUu8pQsxx220qS7AXv04T

Hard Reset
Episode 26 - VLSI Chip Architect (Lior Narkis)

Hard Reset

Play Episode Listen Later Mar 27, 2023 49:53


Episode 26: the chip architect role. We had the pleasure of interviewing Lior Narkis (and before the questions start: no, not the singer) about the complex and challenging world of chip architecture! In this episode you will hear about: * the tight connection between architecture and all the chip disciplines we have already met, and beyond * how the architect ensures that the chip he designs will be at the cutting edge of technology, meet customer requirements and converge on schedule * how a team of architects can include students on one side and engineers with 20 years of experience on the other. We also have a listeners' WhatsApp group. Come join, there are stickers >>> https://chat.whatsapp.com/KwUu8pQsxx220qS7AXv04T

Forbes India Daily Tech Brief Podcast
Amazon to integrate logistics network, SmartCommerce with ONDC; Ericsson lays off 8,500 staff

Forbes India Daily Tech Brief Podcast

Play Episode Listen Later Feb 27, 2023 5:56


Amazon announced on February 24 that it will integrate its logistics network and SmartCommerce, a suite of SaaS products built on Amazon Web Services, with the government of India-backed Open Network for Digital Commerce (ONDC). Ericsson has announced the largest layoff in the telecom sector in the current economic slowdown, Reuters reports. Also in this brief, Nokia has announced the G22 – a smartphone with a backplate made from recycled plastic that can be removed easily for DIY repairs; and the latest on how India is stepping up its semiconductor efforts. Notes: Amazon will integrate its logistics network and SmartCommerce, a suite of SaaS products built on Amazon Web Services, with the Open Network for Digital Commerce (ONDC), the open networks-based ecommerce protocols backed by the Indian government. “This will be Amazon's initial collaboration with ONDC as we continue to explore other potential opportunities for stronger integration between the two in future,” the company said in a press release on Feb. 24. In some smartphone news, Nokia has designed one of its next phones to make it easy for repairs, with a design that harkens back to the early years of mobile phones. The Nokia G22, developed by Finnish manufacturer HMD Global, is a standard smartphone with a 6.5-inch screen and a 50-megapixel main camera, but it's the phone's outer shell and insides that make it special, CNBC reports. The handset includes a recyclable plastic back which can be easily removed to swap out broken components, according to CNBC. With tools and repair guides from hardware repair advocacy firm iFixit, a user can remove and replace the phone's cover, battery, screen and charging port. More on Scandinavia, Telecom equipment maker Ericsson will lay off 8,500 employees globally as part of its plan to cut costs, Reuters reported on Feb. 24, citing a memo sent to employees and seen by the news agency. 
While technology companies such as Microsoft, Meta and Google have laid off tens of thousands of employees citing economic conditions, Ericsson's move would be the largest layoff to hit the telecoms industry, according to Reuters. In some semiconductor industry news, India's minister of state for electronics and IT, Rajeev Chandrasekhar said last week that India Semiconductor Research Centre (ISRC), a private, industry-led research centre, will soon be launched and the existing Semiconductor Laboratory is being modernised and pivoted into a research fab that will be co-located with the ISRC. The minister was speaking at the opening of the second Semicon India FutureDESIGN Roadshow, on Feb. 24, in Bengaluru. The Government plans to introduce an educational curriculum as part of the Future Skills programme, Chandrasekhar said. “It has been developed in collaboration with industry experts and academics. A large number of colleges will have new degrees, new electives, and new certification programs in VLSI.” “We are actively working with fab companies to create on-the-job training type of internships for students in the semiconductor space,” he said, according to a government press release.

Future of Coding
No Silver Bullet by Fred Brooks

Future of Coding

Play Episode Listen Later Feb 11, 2023 180:17


Jimmy and I have each read this paper a handful of times, and each time our impressions have flip-flopped between "hate it so much" and "damn that's good". There really are two sides to this one. Two reads, both fair, both worth discussing: one of them within "the frame", and one of them outside "the frame". So given that larger-than-normal surface for discursive traversal, it's no surprise that this episode is, just, like, intimidatingly long. This one is so, so long, friends. See these withered muscles and pale skin? That's how much time I spent in Ableton Live this month. I just want to see my family. No matter how you feel about Brooks, our thorough deconstruction down to the nuts and bolts of this seminal classic will leave you holding a ziplock bag full of cool air and wondering to yourself, "Wait, this is philosophy? And this is the future we were promised? Well, I guess I'd better go program a computer now before it's too late and I never exist." For the next episode, we're reading a fish wearing a bathrobe. Sorry, it's late and I'm sick, and I have to write something, you know? Links: Fred Brooks also wrote the Mythical Man-Month, which we considered also discussing on this episode but thank goodness we didn't. Also, Fred Brooks passed away recently. We didn't mention it on the show, but it's worth remarking upon. RIP, and thanks for fighting the good fight, Fred. I still think you're wrong about spatial programming, but Jimmy agrees with you, so you can probably rest easy since between the two of us he's definitely the more in touch with the meaning of life. The Oxide and Friends podcast recorded an episode of predictions. Jimmy's Aphantasia motivates some of his desire for FoC tools. Don't miss the previous episode on Peter Naur's Programming as Theory Building, since Ivan references it whilst digging his own grave. Jimmy uses Muse for his notes, so he can highlight important things in two colors — yes, two colors at the same time. Living in the future. 
For the Shadow of the Colossus link, here's an incredible speedrun of the game. Skip to 10:20-ish for a great programming is like standing on the shoulders of a trembling giant moment. Mu is a project by Kartik Agaram, in which he strips computing down to the studs and rebuilds it with a more intentional design. “Running the code you want to run, and nothing else.” “Is it a good-bad movie, a bad-bad movie, or a movie you kinda liked?” Ivan did some research. Really wish Marco and Casey didn't let him. Jimmy did an attack action so as to be rid of Brooks' awful invisibility nonsense. Awful. As promised, here's a link in the show notes to something something Bryan Cantrill, Moore's Law, Bryan Adams, something something. Dynamicland, baby! Here's just one example of the racist, sexist results that current AI tools produce when you train them on the internet. Garbage in, garbage out — a real tar pit. AI tools aren't for deciding what to say; at best, they'll help with how to say it. Gray Crawford is one of the first people I saw posting ML prompts what feels like an eternity ago, back when the results all looked like blurry goop but like… blurry goop with potential. Not sure of a good link for Jimmy's reference that Age of Empires II used expert systems for the AI, but here's a video that talks about the AI in the game and even shows some Lisp code. Idris is a language that has a bit of an “automatic programming” feel. The visual programming that shall not be named. When people started putting massive numbers of transistors into a single chip (eg: CPU, RAM, etc) they called that process Very Large Scale Integration (VLSI). Also, remember that scene in the first episode of Halt and Catch Fire when the hunky Steve Jobs-looking guy said "VLSI" to impress the girl from the only good episode of Black Mirror? I'm still cringing. Sally Haslanger is a modern day philosopher and feminist who works with accident and essence despite their problematic past.
Music featured in this episode: Never, a song I wrote and recorded on Tuesday after finally cleaning my disgusting wind organ. It was like Hollow Knight in there. Get in touch, ask us questions, send us old family recipes: Ivan: Mastodon • Email Jimmy: Mastodon • Twitter Or just DM us in the FoC Slack. futureofcoding.org/episodes/062See omnystudio.com/listener for privacy information.

Hard Reset
Episode 19 - Memristor (Prof. Shahar Kvatinsky)

Hard Reset

Play Episode Listen Later Dec 19, 2022 99:40


Quartets. And no, we do not mean the card game we played as children, but quartets in nature. Look around: much of the science we know from nature comes in fours: Maxwell's equations, the fundamental forces, the four classical elements. Yet when we look at the basic electronic components, we find only three: the capacitor, the inductor and the resistor. Could there be a fourth component that we have not yet discovered? It turns out there is, and we talk about it in this week's episode! Join us for our first research episode! This week Prof. Shahar Kvatinsky joined us for a fascinating episode on the memristor, its applications and a bit beyond. We would love to hear from you and start a conversation about this episode's topic, past episodes and future ones. You are also welcome to talk to us about anything else :) Comments and insights? Collaboration proposals? Write to us! podcasthardreset@gmail.com Enjoy, and share with friends!
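As background for the "fourth element" idea in this episode (this note is ours, not from the show): the standard textbook argument is Leon Chua's 1971 symmetry observation. The four circuit variables admit six pairwise relations; two are definitions and three are the classical elements, leaving one pairing unaccounted for:

```latex
% Four circuit variables: charge q, flux \varphi, voltage v, current i.
% Two relations are definitions: i = dq/dt and v = d\varphi/dt.
% Three are the classical elements:
%   resistor: dv = R\,di,  capacitor: dq = C\,dv,  inductor: d\varphi = L\,di.
% The missing pairing relates flux to charge. Chua named the element that
% provides it the memristor, with memristance M(q):
d\varphi = M(q)\,dq
\quad\Longrightarrow\quad
v(t) = \frac{d\varphi}{dt} = M(q(t))\,\frac{dq}{dt} = M(q(t))\,i(t)
```

In other words, a memristor behaves like a resistor whose resistance depends on the history of the charge that has flowed through it, which is what makes it interesting for non-volatile memory and neuromorphic hardware.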

Quantum
Quantum 43 - Actualités de novembre 2022

Quantum

Play Episode Listen Later Dec 5, 2022 63:49


Past events: Quandela LOQathon, November 7-9, 2022. A hackathon held at Jussieu, in partnership with QICS (Sorbonne University's quantum hub), OVHcloud and GENCI. On November 8, EQSI, the European Quantum Software Institute, was launched. https://www.quantum.amsterdam/launch-of-eqsi-european-quantum-software-institute-in-paris/ OVHcloud EcoEx On Stage at the Olympia: the cloud provider's summit, with 30 minutes dedicated to quantum, featuring a recorded interview with Alain Aspect followed by a panel with Maud Vinet (SiQuance), Valérian Giesz (Quandela) and Christophe Legrand (Pasqal). Octave Klaba (founder) and Michel Paulin (CEO) reaffirmed OVHcloud's commitment to quantum. https://ecoexonstage.ovhcloud.com/en/ (at 1:20 in the replay). Another initiative in Germany, launched and supported more directly by the German government (Germany's Ministry of Economic Affairs and Climate Action, BMWK), built around Ionos: https://thequantuminsider.com/2022/11/14/germany-to-create-its-first-quantum-computing-business-cloud/ Innovacs conference, November 24: "Script your future" on quantum, organized by the INNOVACS research group in the social sciences of innovation. https://my.weezevent.com/soiree-ateliers-design-fiction https://www.oezratty.net/Files/Work/Olivier%20Ezratty%20Design%20Fiction%20Quantique%20Nov2022.pptx Minalogic Quantum Day, October 4, 2022: https://www.youtube.com/playlist?list=PLYkgljOcCgszPUSHLTpXHnKlUAKS0ZFfi World Quantum Congress in Washington DC, with a French village including Pasqal, Quandela, Alice&Bob and Siquance. https://www.quantum.gov/the-united-states-and-france-sign-joint-statement-to-enhance-cooperation-on-quantum/
Upcoming events: IEDM, with Maud Vinet on "Enabling full fault tolerant quantum computing with silicon based VLSI technologies": https://www.ieee-iedm.org/program-overview Q2B by QC-Ware in Santa Clara, Silicon Valley, USA: https://q2b.qcware.com/2022-conferences/silicon-valley/ GDR RO conference on operations research at the University of Technology of Troyes, April 17-21, 2023: "Emerging optimization methods: from metaheuristics to quantum approaches". https://perso.isima.fr/~lacomme/GT2L/EUME_JE/EUME_Joint_Event.php An international event with training on quantum computing (tutorials and practical work on quantum computing for optimization), launched by the EUME (Europe) and GT ROQ (France) working groups.
Startups: On November 29, 2022, a press conference was held in Grenoble to announce the creation of Siquance, the startup launched by Maud Vinet, Tristan Meunier and François Perruchot. https://www.siquance.com/ A visit to Pasqal. See Quantum Feature Maps for Graph Machine Learning on a Neutral Atom Quantum Processor by Boris Albrecht, Loic Henriet et al, November 2022 (19 pages), presented in https://medium.com/pasqal-io/predicting-toxicity-with-qubits-c9dd2517df59 Quandela announced the commissioning of its first cloud-based quantum computer. Its Perceval software is available in the cloud via OVHcloud, with access to 12 qubits as well as classical emulation. https://www.quantum-inspire.com/ https://www.quandela.com/wp-content/uploads/2022/11/Quandela-The-first-European-quantum-computer-on-the-cloud-developed-by-Quandela.pdf Xanadu raises $100M.
Science: A visit to IRIG in Grenoble. https://thequantuminsider.com/2022/11/10/nqcc-appoints-professor-elham-kashefi-as-chief-scientist/ and … lunch with her on the 8th. The announcement of IBM Osprey and its 433 qubits: https://www.oezratty.net/wordpress/2022/assessing-ibm-osprey-quantum-computer/ An op-ed by Xavier Vasquez of IBM: https://www.zdnet.fr/actualites/la-decennie-quantique-avance-encore-plus-vite-que-prevu-39950318.htm China, for its part, is at 121 superconducting qubits: Digital simulation of non-Abelian anyons with 68 programmable superconducting qubits by Shibo Xu et al, November 2022 (27 pages). Microsoft's Resource Estimator: Assessing requirements to scale to practical quantum advantage by Michael E. Beverland et al, Microsoft Research, November 2022 (41 pages). https://alice-bob.com/2022/11/17/alice-bob-tests-azure-quantum-resource-estimator-highlighting-the-need-for-fault-tolerant-qubits/ Quantum network architectures: Eleni Diamanti and Iordanis Kerenidis published a paper on simulating a metropolitan quantum network with entanglement resources: Quantum City: simulation of a practical near-term metropolitan quantum network by Raja Yehia, Simon Neves, Eleni Diamanti and Iordanis Kerenidis, Sorbonne Université LIP6 and Université Paris Cité IRIF, November 2022 (28 pages). In the same area, Tom Darras and Julien Laurat of the LKB at the École Normale (also cofounders of the startup WeLinQ with Eleni Diamanti) published a preprint describing a protocol for converting photonic qubits between their discrete-variable and continuous-variable encodings, enabling remote links between quantum computers: A quantum-bit encoding converter by Tom Darras, Julien Laurat et al, November 2022 (15 pages). Google's wormhole: https://www.quantamagazine.org/physicists-create-a-wormhole-using-a-quantum-computer-20221130/ https://ai.googleblog.com/2022/11/making-traversable-wormhole-with.html https://www.nature.com/articles/s41586-022-05424-3 https://twitter.com/skdh/status/1598175023067717632?s=49&t=RHEFuw_j2qSqJMmRQqrBWw

THE ONE'S CHANGING THE WORLD -PODCAST
HOW NEUROMORPHIC COMPUTING WILL ACCELERATE ARTIFICIAL INTELLIGENCE - PROF SHUBHAM SAHAY- IIT KANPUR

THE ONE'S CHANGING THE WORLD -PODCAST

Play Episode Listen Later Nov 14, 2022 44:28


#neuromorphic #artificialintelligence #brain #braininspired #computing #toctw #podcast Neuromorphic computing is a method of computer engineering in which elements of a computer are modeled after systems in the human brain and nervous system. The term refers to the design of both hardware and software computing elements. Neuromorphic engineers draw from several disciplines -- including computer science, biology, mathematics, electronic engineering and physics -- to create artificial neural systems inspired by biological structures. There are two overarching goals of neuromorphic computing (sometimes called neuromorphic engineering). The first is to create a device that can learn, retain information and even make logical deductions the way a human brain can -- a cognition machine. The second goal is to acquire new information -- and perhaps prove a rational theory -- about how the human brain works. Prof Shubham Sahay is the head of the NeuroComputing and Hardware Security (NeuroCHaSe) research group at @IIT Kanpur & they are working on the development of (a) novel hardware platforms to perform energy-efficient computation and (b) novel hardware security primitives to protect the cyber-physical systems against security vulnerabilities and adversary attacks. The research thrust is highly interdisciplinary in nature and exploits principles and knowledge accumulated from semiconductor device physics, VLSI circuit design, computational neuroscience, Machine learning, etc., involving both experimental characterization and theoretical work. https://home.iitk.ac.in/~ssahay/ https://in.linkedin.com/in/shubham-sahay-b1580bb0 http://shubhamsahai.in Don't Forget to Subscribe www.youtube.com/ctipodcast

THE ONE'S CHANGING THE WORLD -PODCAST
HOW NEUROMORPHIC COMPUTING WILL ACCELERATE ARTIFICIAL INTELLIGENCE - PROF SHUBHAM SAHAY- IIT KANPUR

THE ONE'S CHANGING THE WORLD -PODCAST

Play Episode Listen Later Nov 7, 2022 44:28


#neuromorphic #artificialintelligence #brain #braininspired #computing #toctw #podcast Neuromorphic computing is a method of computer engineering in which elements of a computer are modeled after systems in the human brain and nervous system. The term refers to the design of both hardware and software computing elements. Neuromorphic engineers draw from several disciplines -- including computer science, biology, mathematics, electronic engineering and physics -- to create artificial neural systems inspired by biological structures. There are two overarching goals of neuromorphic computing (sometimes called neuromorphic engineering). The first is to create a device that can learn, retain information and even make logical deductions the way a human brain can -- a cognition machine. The second goal is to acquire new information -- and perhaps prove a rational theory -- about how the human brain works. Prof Shubham Sahay is the head of the NeuroComputing and Hardware Security (NeuroCHaSe) research group at @IIT Kanpur & they are working on the development of (a) novel hardware platforms to perform energy-efficient computation and (b) novel hardware security primitives to protect the cyber-physical systems against security vulnerabilities and adversary attacks. The research thrust is highly interdisciplinary in nature and exploits principles and knowledge accumulated from semiconductor device physics, VLSI circuit design, computational neuroscience, Machine learning, etc., involving both experimental characterization and theoretical work. https://home.iitk.ac.in/~ssahay/ https://in.linkedin.com/in/shubham-sahay-b1580bb0 http://shubhamsahai.in

Moore's Lobby: Where engineers talk all about circuits
AMD Low-Power Guru Addresses the Looming Electronics Power Challenge

Moore's Lobby: Where engineers talk all about circuits

Play Episode Listen Later Oct 25, 2022 73:19


Starting his engineering education by taking classes at Caltech under Carver Mead, "one of the luminaries of computer VLSI design," Sam Naffziger "really got excited about the VLSI design field" early in his career. That excitement hasn't waned a bit as he continues to tackle important challenges in low-power circuit and system design.  Low-power design techniques like boost and adaptive clocking were brand new in the early 2000s, and of little interest to teams focused almost solely on performance. So, Sam had to sneak some of those low-power features into early designs: "There was another engineer who had a tiny little microcontroller for other functions to manage the I/O interfaces, and so I managed to get a backdoor path into that microcontroller and some code space so we could actually sneak in, so that the design leads didn't actually know we had this back door." And the rest, as the saying goes, is history: "So we got this stuff in there, and it proves so valuable…that suddenly it became an essential element for all future processors." So valuable that it is now used in everything from smartphones to desktop PCs and the latest supercomputers. Naffziger has had such a fascinating career in the integrated circuit world that you will not want to miss a minute of this Moore's Lobby interview with our host Daniel Bogdanoff. Some of the other great topics in this episode are: -Early developments of in-order and out-of-order computer architectures -Why AMD pays attention to the overclocking community -Is performance per watt more important than raw performance?  -Sam's key role in one of the most famous Caltech pranks of all time!

Astro arXiv | all categories
XPOL-III: a New-Generation VLSI CMOS ASIC for High-Throughput X-ray Polarimetry

Astro arXiv | all categories

Play Episode Listen Later Sep 25, 2022 0:42


XPOL-III: a New-Generation VLSI CMOS ASIC for High-Throughput X-ray Polarimetry by M. Minuti et al. on Sunday 25 September. While the successful launch and operation in space of the Gas Pixel Detectors onboard the PolarLight cubesat and the Imaging X-ray Polarimetry Explorer demonstrate the viability and the technical soundness of this class of detectors for astronomical X-ray polarimetry, it is clear that the current state of the art is not ready to meet the challenges of the next generation of experiments, such as the enhanced X-ray Timing and Polarimetry mission, designed to allow for a significantly larger data throughput. In this paper we describe the design and test of a new custom, self-triggering readout ASIC, dubbed XPOL-III, specifically conceived to address and overcome these limitations. While building upon the overall architecture of the previous generations, the new chip improves over its predecessors in several, different key areas: the sensitivity of the trigger electronics, the flexibility in the definition of the readout window, as well as the maximum speed for the serial event readout. These design improvements, when combined, allow for almost an order of magnitude smaller dead time per event with no measurable degradation of the polarimetric, spectral, imaging or timing capability of the detector, providing a good match for the next generation of X-ray missions. arXiv: http://arxiv.org/abs/2208.14103v3

Astro arXiv | all categories
XPOL-III: a New-Generation VLSI CMOS ASIC for High-Throughput X-ray Polarimetry

Astro arXiv | all categories

Play Episode Listen Later Sep 22, 2022 0:49


XPOL-III: a New-Generation VLSI CMOS ASIC for High-Throughput X-ray Polarimetry by M. Minuti et al. on Thursday 22 September. While the successful launch and operation in space of the Gas Pixel Detectors onboard the PolarLight cubesat and the Imaging X-ray Polarimetry Explorer demonstrate the viability and the technical soundness of this class of detectors for astronomical X-ray polarimetry, it is clear that the current state of the art is not ready to meet the challenges of the next generation of experiments, such as the enhanced X-ray Timing and Polarimetry mission, designed to allow for a significantly larger data throughput. In this paper we describe the design and test of a new custom, self-triggering readout ASIC, dubbed XPOL-III, specifically conceived to address and overcome these limitations. While building upon the overall architecture of the previous generations, the new chip improves over its predecessors in several, different key areas: the sensitivity of the trigger electronics, the flexibility in the definition of the readout window, as well as the maximum speed for the serial event readout. These design improvements, when combined, allow for almost an order of magnitude smaller dead time per event with no measurable degradation of the polarimetric, spectral, imaging or timing capability of the detector, providing a good match for the next generation of X-ray missions. arXiv: http://arxiv.org/abs/2208.14103v2

Hard Reset
Episode 9 - Formal Verification Engineer (Or Drucker)

Hard Reset

Play Episode Listen Later Aug 1, 2022 62:08


Who is the formal verification engineer, and what does he do? What does his workday look like? Who does he work with? Why is conventional verification needed if formal verification exists? Which came first? In this episode we hosted Or Drucker, who answered these questions and many others. Or has 4 years of experience in the field and presented it beautifully. Join us for an episode in which we interviewed a formal verification engineer and asked the questions we thought you would want answered. If you have more questions we did not answer in the episode, write to us :) podcasthardreset@gmail.com

Hard Reset
Episode 8 - Design Verification Engineer (Hadar Yehudar)

Hard Reset

Play Episode Listen Later Jul 18, 2022 46:25


What does a verification engineer's routine look like? What are his working interfaces, and who is a good fit for such a role? If the verification engineer checks someone else's code, who checks his code? To answer these questions and more, we hosted Hadar and interviewed him about his career. Hadar comes to this episode with 20 years of experience in the field and a deep understanding of the role and of how to present it to those unfamiliar with it.

Hard Reset
Episode 7 - Front End Engineer (Amnon Zaideman)

Hard Reset

Play Episode Listen Later Jul 4, 2022 55:40


In the first episode we talked about the life cycle of a chip and the roles along the way that develop it. One of those roles is the Frontend Engineer. What does a Frontend Engineer do? What is hard about developing hardware code? What does a Frontend engineer's day-to-day look like? How do I know whether the role suits me, and where can I grow from there? To answer these questions, we hosted Amnon Zaideman and interviewed him about his career. We heard about an unconventional and fascinating path for an electrical engineer, about the vast experience he has accumulated, and about the Thunderbolt technology he helped develop and that you enjoy today.

20 Minute Leaders
Ep834: Daniel Lischinsky | Founder & CEO, Ohh-Med Medical Ltd.

20 Minute Leaders

Play Episode Listen Later Jun 18, 2022 24:38


Daniel is the Founder and CEO of Ohh-Med Medical. He is a Computer Science engineer from the Technion - Israel Institute of Technology and a specialist in VLSI technologies. He holds several patents in the fields of anti-aging and urology, more specifically in the field of Erectile Dysfunction (ED). He was one of the first employees at startup companies such as Saifun Semiconductors, Mellanox Technologies (today Nvidia), and Syneron, and is the founder of EndyMed Medical (ENDY:TLV), where he served as VP R&D and CTO. He is a father of 3 boys, two of whom are students at the Technion, and he is married to Ora, a distinguished lawyer in Israel.

IoT For All Podcast
How the Smart City Landscape is Evolving | Trinity Mobility's Mithila Holla

IoT For All Podcast

Play Episode Listen Later Jun 9, 2022 25:45


Mithila and Ryan open up the podcast with an introduction to Trinity Mobility, its role in the industry, and its applications. Mithila then discusses how she has seen the smart city industry evolve. She and Ryan wrap up the podcast with a high-level conversation around challenges in smart cities and what to expect from Trinity Mobility in the near future.Mithila Holla is the General Manager for Product Management and Marketing at Trinity Mobility, an applied IoT and AI company based in India. With over ten years of experience in multiple domains, she heads the Product Management, Ux Design, Business Analysis, and Marketing functions at Trinity. She works closely with smart city industry experts to streamline and improve smart city project deliveries globally. Mithila is responsible for the products and solutions that the company offers across six segments – Smart Cities, Safe Cities, Emergency Response Management, Smart Communities, Utilities, Early Warning, and Disaster Management. She holds a Master's degree in VLSI from Manipal University and a Master's degree in Business Management from the European School of Technology, Berlin. She was recently recognized as one of the 10 Most Impactful Women in Technology 2021 by Analytics Insight, in their global annual listing.

Anticipating The Unintended
#168 The (W)heat Is On

Anticipating The Unintended

Play Episode Listen Later May 15, 2022 25:36


India Policy Watch #1: Silver Linings Playbook
Insights on burning policy issues in India
- RSJ

Hello Readers! We are back after a nice, little break. Things have changed a bit in the past four weeks; haven't they? Stock markets around the world are down by 10 per cent. Central banks have hiked interest rates by 40-50 bps. Inflation in the US is running at a 40-year high of over eight per cent. Retail inflation in India at 7.79 per cent is beyond the zone of comfort for RBI. The Rupee hit an all-time low last week. Technology stocks are down all over the world and the deal pipeline in the digital startup space has all but dried up. Musk is buying Twitter or maybe he isn't. NFTs are dead and buried and crypto valuations are in a funk (or in a death spiral as Matt Levine puts it). And no one really knows anything about the Ukraine war. It drags on like a Putin-shaped monster waddling its way through the spring rasputitsa with no hope of getting anywhere. Such then are the vagaries of the world.

Thankfully, the Indian news channels were debating more important issues during these times. Like loudspeaker bans in religious institutions. Like a videographic survey of a Varanasi mosque by a 52-member team. Or if the Taj Mahal was indeed a Shiva temple called Tejo Mahalaya before Shah Jahan, that undisputed emperor of large tracts of Hindustan in the 17th century, figured he had run out of land to build a mausoleum in memory of his late wife. Or if Sri Lanka was more in the dumps than us in these times. These things matter. So they must be investigated by the intrepid reporters sitting in the studios. We must be forever thankful for the oasis of assurance that Indian news channels offer to us in this volatile and uncertain world.

Anyway, coming back to the point, things have turned for the worse since we last wrote on these pages. Yet, amidst all the talks of a recession or stagflation, I believe there's some kind of silver lining for India if it plays this situation well.
Sweet Are the Uses Of Inflation

First, let's look at the inflation and the rising interest rate scenario. India didn't go down the path of expanding the fisc by doling out cash incentives at the peak of Covid-induced distress in the economy. Much of the "20-lakh crores package" that was announced in May 2020 was either repurposing of the existing schemes or putting a monetary value on loans, subsidies or free food that was offered to the people. So, while the RBI cut rates and increased liquidity, the supply of money in the system and the government balance sheet didn't bloat like those in the US and other western economies. The upside of the US model was that the economy rebounded faster, it began running at almost full employment and people who got Covid relief checks started spending as the economy opened up. The downside is an overheated economy that now needs to be cooled down, but that comes with its consequences. The war in Ukraine and the resultant rise in oil and commodity prices have queered the pitch. So, for the first time in a long, long time the Fed has had to raise rates while the markets are falling. In 2022, the global rich have lost over a trillion dollars already as markets have fallen, while the poor have had their wage increases outpace inflation. There are fewer K-shaped recovery discussions happening in the US today. Anyway, these are new scenarios for an entire generation raised on rising stocks, low inflation and low interest rates. This will be a hard landing for them.

In the model that India followed, on the other hand, there was real distress in the rural and informal economy because of the absence of a direct cash transfer scheme during the pandemic. As the economy opened up, there were supply chain disruptions that hurt multiple sectors. The rise in oil prices because of the war added to the inflation. But there is an important difference between our inflation and that of the West. It is more a supply-side issue for us.
A few rate hikes, some stability in oil and commodity prices and our continued diplomatic balancing act that will help us with cheap oil from Russia should stabilise things. We could be looking at a transitory elevated inflation for a few quarters rather than something more structural. Also, we have much greater headroom to control inflation by raising rates. The repo is at 4.4 per cent after the out-of-turn increase by RBI last week. It is useful to remember that as late as mid-2018, the repo was at 6.5 per cent and that didn't look like a very high rate at that time. So, the RBI has another 150-200 bps of flex to tame inflation without seriously hurting growth, unlike the West. And that is only if the inflation prints get into the teens. That looks unlikely. So, what's in store for us? We will continue with the trends we have seen so far: a K-shaped recovery that will hurt the poor more, the formalisation of the economy will mean the published numbers of GST collections or income tax mop-up will be buoyant, the food subsidy and other schemes started during the pandemic will continue for the foreseeable future and we might get away managing to keep inflation down without killing growth.

Second, the structural inflation in the western economies will mean they will have to take a couple of long-term calls. The discussion on the first of these calls is already on with an urgency markedly different from what it was six months back. How to reduce the dependence on oil and gas that support authoritarian regimes around the world? This shift away from the Middle East and Russia for energy is now an irreversible trend. Expect a rethink on nuclear energy and acceleration on the adoption of EVs. The other call is who should we back to replace China as the source of cheap goods and services to the west? The low inflation that the west has been used to over the past two decades is in large part on account of China's integration into global trade.
Now China wants to move up the value chain. Worse, it wants to replace the US as the dominant superpower. Continuing to strengthen China economically is no longer an option. China has done its cause no favour by being a bully around its neighbourhood (we know it first hand), being a terrible friend (ask Sri Lanka) and creating an axis with Russia and other authoritarian regimes around the world. There's no going back to a working relationship with China of the kind that was prevailing before the pandemic. It is unreliable and it won't turn into an open, democratic society with rising prosperity as was expected. It is difficult to see beyond India in filling that China-shaped void if the west were to search for continued low costs. Vietnam, Bangladesh and others could be alternatives but they lack the scale for the kind of shift that the west wants to make. The inflation pressure means the west doesn't have time. India has an opportunity here. And I'm more sanguine about this because even if India shoots itself in the foot like it is wont to, the way the die is cast it will still get the benefits of this shift. This is a window available for India even if it were to do its best (or worst) in distracting itself with useless, self-defeating issues.

Lastly, there are some unintended consequences of a moderately high single-digit inflation for India. This is a government that likes to be fiscally prudent. It didn't go down the path of 'printing money' during the pandemic because it cared about the debt to GDP ratio and the likely censure and downgrades from the global rating agencies. But it is also a government that likes welfarism. Welfarism + Hindutva + Nationalism is the trident it has used to power its electoral fortunes. A rate of inflation that's higher than government bond yields will pare down the debt to GDP ratio and allow it to fund more welfare schemes.
And that's not a bad idea too considering there's evidence that things might not be great in the informal economy. That apart, the inflation in food prices because of supply chain disruptions, increase in MSP and the war in Ukraine is good news for the rural economy. After a long time, the WPI food inflation is trending above CPI, which means farmers are getting the upside of higher food prices. I guess no one will grudge them this phase however short-lived it might be. Well, I'm not often optimistic on these pages. But the way the stars have aligned themselves, India does have an opportunity to revive its economy in a manner that can sustain itself for long. The question is will it work hard and make the most of it? Or, is it happy being lulled into false glories and imaginary victimhood from the past that its news channels peddle every day?

A Framework a Week: Errors of omission and commission — how VLSI relates to subsidies
Tools for thinking public policy
— Pranay Kotasthane

(This article is an updated version of my 2014 essay on nationalinterest.in)

The fundamental idea of any testing is to prevent a faulty product from reaching the end consumer. A well-designed test is one that accurately identifies all types of defects in the product. Very often though, this is not possible as tests may not cover the exact range of defects that might actually exist. In that case, the suite of tests leads to errors of commission or omission. The interesting question, then, is — which of the two errors is acceptable?

An illustration

This second problem can be explained using a fairly simple scenario from "Design-for-Testability" theory used in all integrated chip (IC) manufacturing companies. Consider a firm that makes the processor chips going into your laptops. Every single processor chip goes through a set of tests to identify if the chip is good or bad. Four scenarios result from this exercise.

The two scenarios marked in green are the best-case scenarios.
In the first of these, all the designed tests are unable to find any fault with the chip. At the same time, the chip itself does not show any defects after reaching the end consumers. When such awesomely functional chips reach your laptops, the chip-making companies make profits.

In the second "green" scenario, the tests indicate that there is a problem with the chip. Further debugging (which involves greater costs) concludes that this chip is actually manufactured erroneously. It is then the raison d'être of the tests to throw away these chips so that they do not reach the customers.

However, when tests are unable to identify any problem with the chip even though it is bad, we end up in the second choice problem 1 scenario or the "error of commission". This is the scenario you encounter when your laptop crashes within a few days/weeks/years (within the guarantee period) after purchase. Obviously, this makes the consumer lose trust in the product and dents the manufacturing firm's image.

On the other hand, there is the second choice problem 2, where tests are designed so thoroughly that they start eliminating chips which are actually not dysfunctional. This is the Error of Omission. The cost involved with this error is that it leads to a loss of revenue as many good chips are just thrown away based on faulty tests. It also lowers the confidence of the firm.

The above illustration shows the two errors that are commonly encountered in the chip manufacturing business. Which of them is tolerable is a function of the company's image in the market, the end application of the product and the costs involved. For example, if the chip is being manufactured for use in mission-critical automobile systems like auto-braking or fuel injection, the preferable error is the error of omission as there's a life and personal safety at stake.
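The four test-versus-reality outcomes described above have the shape of a confusion matrix. A minimal sketch in Python (the labels are illustrative, not from any real test flow):

```python
def classify(test_passed: bool, chip_is_good: bool) -> str:
    """Map a (test result, actual chip quality) pair to one of the
    four scenarios described in the text."""
    if test_passed and chip_is_good:
        return "true pass"            # good chip ships: profit
    if not test_passed and not chip_is_good:
        return "true fail"            # bad chip discarded: tests did their job
    if test_passed and not chip_is_good:
        return "error of commission"  # bad chip ships: crashes at the customer
    return "error of omission"        # good chip thrown away: lost revenue

# Enumerate all four scenarios.
for test_passed in (True, False):
    for chip_is_good in (True, False):
        print(test_passed, chip_is_good, "->", classify(test_passed, chip_is_good))
```

Which error to tolerate then becomes a cost question, exactly as the text argues: weight commission errors heavily for automotive parts, and omission errors lightly for low-end phones.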
On the other hand, if the end application is a low-end mobile phone, the company might settle for a higher error of commission and avoid the extra costs of rejecting lots of chips.

Application — Subsidies

The above illustration can directly be applied to a subsidy case to explain the effect of identifying beneficiaries incorrectly. Using the framework above, we can visualise a subsidy program as shown in the figure below.

From the framework above, which would be your second choice? The first option would be to start with very few beneficiaries, being fully aware that there will be a definite Error of Omission. The next step would be to work on reducing this error rate itself. The problem here will be that there might be some people who, even though needy, are not attended to urgently.

Another option would be to start with a large number of beneficiaries, being aware of the errors of commission. A subsequent step would be to try and reduce this error rate. The costs involved here are that the free-riders might sideline the really needy. Such schemes will also require huge sums of capital as they will start by serving a huge number of people. This is the path that most government subsidies follow in India. A digital identity project like Aadhaar plays a role right here — it can reduce the errors of commission.

If you were to design a subsidy scheme, which would be your second choice scenario? Thinking about second choices is generally useful in public policy as the first-choice option is often unavailable. The art of policymaking lies in picking a second-best option that makes most people better off.

India Policy Watch #2: Samaaj Ke Dushman
Insights on burning policy issues in India
- RSJ

Here is a quote for you to ponder over:

All these theoretical difficulties are avoided if one abandons the question "Who should rule?" and replaces it by the new and practical problem: how can we best avoid situations in which a bad ruler causes too much harm?
When we say that the best solution known to us is a constitution that allows a majority vote to dismiss the government, then we do not say the majority vote will always be right. We do not even say that it will usually be right. We say only that this very imperfect procedure is the best so far invented. Winston Churchill once said, jokingly, that democracy is the worst form of government—with the exception of all other known forms of government.

Sounds relevant to our times?

Over the past month, I have been reading Karl Popper's "Open Society and Its Enemies". It is a wonderful book written during WW2 when open and democratic societies were facing their most difficult test yet. The key question Popper is interested in is how do we avoid democracies falling into the trap of turning themselves inwards and giving into a majoritarian system of governance. Seems like as relevant a question as it was during the time of his writing. While reading the book, I chanced upon a most amazing essay written by Popper himself about his book in The Economist in 1988. Reading it three decades later, it is remarkable how accurate he is in first framing the core question of a democracy right and then looking for solutions that can be tested with scientific rigour. I have produced excerpts from that essay below:

In "The Open Society and its Enemies" I suggested that an entirely new problem should be recognised as the fundamental problem of a rational political theory. The new problem, as distinct from the old "Who should rule?", can be formulated as follows: how is the state to be constituted so that bad rulers can be got rid of without bloodshed, without violence?

This, in contrast to the old question, is a thoroughly practical, almost technical, problem. And the modern so-called democracies are all good examples of practical solutions to this problem, even though they were not consciously designed with this problem in mind.
For they all adopt what is the simplest solution to the new problem—that is, the principle that the government can be dismissed by a majority vote…

My theory easily avoids the paradoxes and difficulties of the old theory—for instance, such problems as "What has to be done if ever the people vote to establish a dictatorship?" Of course, this is not likely to happen if the vote is free. But it has happened. And what if it does happen? Most constitutions in fact require far more than a majority vote to amend or change constitutional provisions, and thus would demand perhaps a two-thirds or even a three-quarters ("qualified") majority for a vote against democracy. But this demand shows that they provide for such a change; and at the same time they do not conform to the principle that the ("unqualified") majority will is the ultimate source of power—that the people, through a majority vote, are entitled to rule.

Popper's answer is the two-party system. The Congress is busy with its chintan shivir as we speak and I read Popper with bemusement when he wrote on the merits of a two-party system.

The two-party system

In order to make a majority government probable, we need something approaching a two-party system, as in Britain and in the United States. Since the existence of the practice of proportional representation makes such a possibility hard to attain, I suggest that, in the interest of parliamentary responsibility, we should resist the perhaps-tempting idea that democracy demands proportional representation. Instead, we should strive for a two-party system, or at least for an approximation to it, for such a system encourages a continual process of self-criticism by the two parties.

Such a view will, however, provoke frequently voiced objections to the two-party system that merit examination: "A two-party system represses the formation of other parties." This is correct.
But considerable changes are apparent within the two major parties in Britain as well as in the United States. So the repression need not be a denial of flexibility.

The point is that in a two-party system the defeated party is liable to take an electoral defeat seriously. So it may look for an internal reform of its aims, which is an ideological reform. If the party is defeated twice in succession, or even three times, the search for new ideas may become frantic, which obviously is a healthy development. This is likely to happen, even if the loss of votes was not very great.

Under a system with many parties, and with coalitions, this is not likely to happen. Especially when the loss of votes is small, both the party bosses and the electorate are inclined to take the change quietly. They regard it as part of the game—since none of the parties had clear responsibilities. A democracy needs parties that are more sensitive than that and, if possible, constantly on the alert. Only in this way can they be induced to be self-critical. As things stand, an inclination to self-criticism after an electoral defeat is far more pronounced in countries with a two-party system than in those where there are several parties. In practice, then, a two-party system is likely to be more flexible than a multi-party system, contrary to first impressions.

PolicyWTF: The Wheat Ban Photo Op
This section looks at egregious public policies. Policies that make you go: WTF, did that really happen?
— Pranay Kotasthane

The Directorate General of Foreign Trade (DGFT) has banned wheat exports from India with immediate effect. For an expert's view on why this ban is a policyWTF, read Ashok Gulati and Sachit Gupta's take in the Indian Express. The article lists three less-worse options that the government chose to ignore, opting for this rather extreme step instead. From a broader public policy perspective, there are three points to learn from this PolicyWTF.
One, it reflects the perilously increasing scope of what's classified as "strategic". Once an item gets that tag, a fundamental concept behind international trade, that "only individuals trade, countries don't", gets defenestrated. Here's why I think this "strategic" line of thinking is the real reason behind this policyWTF. Until 11th May, the message from the government was that it had procured sufficient stocks of wheat and that there was no plan for an outright ban on exports. The PM in his recent visit to Germany even proclaimed that Indian farmers have "stepped forward to feed the world" even as many countries grapple with wheat shortages. There were reports that the government might consider an export tax on wheat; a ban wasn't on the cards. A May 14 Business Standard report cited an unnamed senior official thus:

"We have worked on four-five policy measures to curb this unabated flow of wheat from India. A final decision on this is yet to be taken. We are waiting for approval from the Prime Minister's Office (PMO)."

So it seems that it was the PMO that opted for this extreme step. Why, you ask? The reason perhaps lies in the exceptions to the ban. The government plans to permit wheat exports bilaterally, on the request of specific countries. In one fell swoop, every bag of wheat being exported by an Indian farmer now becomes an economic diplomacy photo-op for the government. While this may seem like a masterstroke for the government, this 'strategy-fication' comes at the immense expense of farmers and traders. While they will not be able to cash in on the immediate opportunities, they might also receive smaller, if not fewer, orders from international buyers in the future.

The second lesson from this PolicyWTF is for the farmers. While this particular ban will undoubtedly hurt the farmers and traders, its origins lie in the now-normalised intervention of governments in all aspects of agriculture.
In that sense, Minimum Support Prices (MSP) and the wheat ban are two sides of the same coin. A government that giveth will also taketh at whim. The push for making MSP a law is likely to invite more such export bans from the government, in the name of consumer interest. Observe the ease with which the State can take away economic freedoms in this statement by the Commerce Secretary:

"So, what is the purpose of this order. What it is doing is in the name of prohibition - we are directing the wheat trade in a certain direction. We do not want the wheat to go in an irregulated manner to places where it might get just hoarded or where it may not be used to the purpose which we are hoping it would be used for".

The third policy lesson is the need to lift the bans on genetically edited crops. The ostensible reason for this ban was a decrease in production due to the heatwave in large parts of India. Assuming that climate change will lead to many more instances of crop failures, crops engineered to withstand higher temperatures are an important part of the solution. In the wake of the ongoing wheat shortage, there are signs that the regulatory environment is changing in a few countries. Australia and New Zealand approved a drought-tolerant Argentinean wheat variety for human consumption last week. Many countries now classify gene-edited crops that do not use DNA from a different organism as non-GMO. Hopefully, Indian regulators too will move in the same direction.

HomeWork
Reading and listening recommendations on public policy matters
[Post] The Indian 'sedition' law was in the news last week. We had a conceptual take on sedition in edition #115 that puts the current events in context.
[Podcast] What's it like to grow, operate, and sell a manufacturing firm in India? That's the theme of the latest Puliyabaazi with Hema Hattangady.
[Book] Lithium batteries are all the rage.
For understanding the politics and the geopolitics of these batteries, read Lukacz Bednarski's succinct Lithium: The Global Race for Battery Dominance and the New Energy Revolution. For a short introduction to battery failure accidents in India, here's a nice primer by Saurabh Chandra on Puliyabaazi.

From Research to Reality: The Hewlett Packard Labs Podcast
Driving Innovation at Hewlett Packard Labs

From Research to Reality: The Hewlett Packard Labs Podcast

Play Episode Listen Later Feb 16, 2022 28:04


In this week's episode of the Hewlett Packard Labs Podcast "From Research to Reality", Dejan Milojicic hosts Andrew Wheeler, Fellow and VP, Director of Hewlett Packard Labs. Andrew discusses his technical background and how he became director of Hewlett Packard Labs. He explains the roles of Labs and how they have evolved over the years. He then presents his vision for the Labs as well as its key programs. He focuses on technology transfer to business and contributions to industry verticals. Andrew also talks about his origins and the outdoor activities he enjoys.

From Research to Reality: The Hewlett Packard Labs Podcast
Upcoming Hewlett Packard Labs Podcast: Driving Innovation at Hewlett Packard Labs

From Research to Reality: The Hewlett Packard Labs Podcast

Play Episode Listen Later Jan 20, 2022 1:18


In next week's episode of the Hewlett Packard Labs Podcast "From Research to Reality", Dejan Milojicic hosts Andrew Wheeler, Fellow and VP, Director of Hewlett Packard Labs. Andrew discusses his technical background and how he became director of Hewlett Packard Labs. He explains the roles of Labs and how they have evolved over the years. He then presents his vision for the Labs as well as its key programs. He focuses on technology transfer to business and contributions to industry verticals. Andrew also talks about his origins and the outdoor activities he enjoys.

The CoCo Crew Podcast
Episode 79 -- Why Pay Someone Else? ; VLSI Primer ; Review of Blockdown

The CoCo Crew Podcast

Play Episode Listen Later Dec 24, 2021


Episode 79 Show Notes -- http://cococrew.org/cococrew-podcast-79.html

The Nonlinear Library: Alignment Forum Top Posts
Measuring hardware overhang by hippke

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 5, 2021 8:21


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Measuring hardware overhang, published by hippke on the AI Alignment Forum.

Measuring hardware overhang

Summary

How can we measure a potential AI or hardware overhang? For the problem of chess, modern algorithms gained two orders of magnitude in compute (or ten years in time) compared to older versions. While it took the supercomputer "Deep Blue" to win over world champion Garry Kasparov in 1997, today's Stockfish program achieves the same ELO level on a 486-DX4-100 MHz from 1994. In contrast, the scaling of neural network chess algorithms to slower hardware is worse (and more difficult to implement) compared to classical algorithms. Similarly, future algorithms will likely be able to better leverage today's hardware by 2-3 orders of magnitude. I would be interested in extending this scaling relation to AI problems other than chess to check its universality.

Introduction

Hardware overhang is a situation where sufficient compute is available, but the algorithms are suboptimal. It is relevant if we build AGI with a large initial build cost, but cheaper run costs. Once built, the AGI might run on many comparably slow machines. That's a hardware overhang with a risk of exponential speed-up. This asymmetry exists for current neural networks: creating them requires orders of magnitude more compute than running them. On the other hand, in The Bitter Lesson by Rich Sutton it is argued that the increase in computation is much more important (orders of magnitude) than clever algorithms (factor of two or less). In the following, I will examine the current state of the algorithm-art using chess as an example.

The example of chess

One of the most well-researched AI topics is chess. It has a long history of algorithms going back to a program on the 1956 MANIAC.
It is comparatively easy to measure the quality of a player by its ELO score. As an instructive example, we examine the most symbolic event in computer chess. In 1997, the IBM supercomputer "Deep Blue" defeated the reigning world chess champion under tournament conditions. The win was taken as a sign that artificial intelligence was catching up to human intelligence. By today's standards, Deep Blue used simple algorithms. Its strength came from computing power. It was an RS/6000-based system with 30 nodes, each with a 120 MHz CPU plus 480 special-purpose VLSI chess chips. For comparison, a common computer at the time was the Intel Pentium II at 300 MHz.

Method: An experiment using a 2020 chess engine

We may wonder: How do modern (better) chess algorithms perform on slower hardware? I tested this with Stockfish version 8 (SF8), one of the strongest classical chess engines. I simulated 10k matches of SF8 against slower versions of itself and a series of older engines for calibration, using cutechess-cli. In these benchmarks, I varied the total number of nodes to be searched during each game. I kept the RAM constant (this may be unrealistic for very old machines, see below). By assuming a fixed thinking time per game, the experiments scale out to slower machines. By cross-correlating various old benchmarks of Stockfish and other engines on older machines, I matched these ratings to units of MIPS; and finally, MIPS approximately to the calendar year. Depending on the actual release dates of the processors, the year axis has a jitter of up to 2 years. I estimate the error for the compute estimates to be perhaps 20%, and certainly less than 50%. As we will see, the results measure in orders of magnitude, so that these errors are small in comparison (
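The rating gaps such engine-versus-engine matches report follow from the standard logistic ELO model relating an average match score to a rating difference. A minimal sketch of that conversion (illustrative only, not the author's actual tooling):

```python
import math

def elo_diff(score: float) -> float:
    """ELO difference implied by an average match score, where
    score = (wins + 0.5 * draws) / games, with 0 < score < 1."""
    return -400.0 * math.log10(1.0 / score - 1.0)

# An engine scoring 75% against an opponent is roughly 191 ELO stronger;
# a 50% score implies no rating difference at all.
print(round(elo_diff(0.75)))
```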

Harshaneeyam
Dr. Moola Subrahmanyam on Tripura

Harshaneeyam

Play Episode Listen Later Sep 5, 2021 23:00


In this episode, Sri Moola Subrahmanyam talks about the renowned writer Tripura. Sri Subrahmanyam holds a doctorate in VLSI architecture and currently works at IIT Palakkad. He is himself a fine poet and writer; the novel 'Atmanoka Divvega' and the poetry collection 'Selayeti Savvadi' are among his well-known works. Harshaneeyam thanks Subrahmanyam garu. This podcast uses the following third-party services for analysis: Podtrac - https://analytics.podtrac.com/privacy-policy-gdrp Chartable - https://chartable.com/privacy

Software Lifecycle Stories
Genesis of a company with Sanjay Jayakumar

Software Lifecycle Stories

Play Episode Listen Later Aug 13, 2021 42:46


Sanjay Jayakumar, CEO and Founder of Ignitarium, shares his stories about:
- Facing uncertainties from very early on, being good in math and physics
- Joining the WIPRO factory from campus in 1991, and being with WIPRO for 21 years
- Phases in WIPRO of roughly 3 years each across various functions
- Early career in engineering, as the interface between design and production of computers
- Moving to the hardware design team and then onto chip design as part of outsourced R&D
- Setting up an SOC for Texas Instruments
- Getting an opportunity to set up an engineering centre at Kochi and scaling it to a 2000-person operation
- The lead into starting Ignitarium
- Underlying common principles when working across cultures in Japan and the US
- Laying the foundation for scaling the engineering centre at Kochi with excellent support at Wipro
- The transition to becoming an entrepreneur, starting with making peace within himself with the decision
- The long road, 200 calls and the first 3 customers, getting the team through personal and professional networks, the genesis of Ignitarium
- Adding value to customers and being key enablers in a product development ecosystem through their expertise in system on chips & signal-processing-intensive design to create a software-enabled platform
- A future-forward thinking approach to continuous evolution of products and solutions
- Culture of innovation, continuous learning and growth opportunities
- Creating partnerships and contributing to ecosystems
- Messages for aspiring engineers in today's world of content at your fingertips

Founder and CEO of Ignitarium, Sanjay Jayakumar proudly leads a cross-functional team of close to 300 professionals and has been responsible for defining Ignitarium's core values, which encompass the organisation's approach towards clients, partners and internal stakeholders, and in establishing an innovation and value-driven organisational culture.
Largely admired for his charisma and humility, he has gained a reputation for inspiring people through his strategic vision and team building capabilities. Prior to founding Ignitarium in 2012, Sanjay spent the initial 22 years of his career with the VLSI and Systems Business unit at Wipro Technologies. In his formative years, Sanjay worked in diverse engineering roles in electronic hardware design, ASIC design and custom library development. Sanjay later handled a flagship (multi-million dollar, 600-engineer strong) Semiconductor & Embedded account, owning complete delivery and business responsibility. Contact him on LinkedIn: Sanjay Jayakumar

ASK Raghulan Tamil
Robotics | VLSI Career | Salary | Coding | VLSI vs Embedded Systems | ASIC Crypto Miner | IoT | AI

ASK Raghulan Tamil

Play Episode Listen Later Aug 6, 2021 20:34


Robotics | VLSI Career | Salary | Coding | VLSI vs Embedded Systems | ASIC Crypto Miner | IoT | AI

Zero to ASIC Course
Tom Spyrou - OpenROAD development, aim, roadmap and integration with OpenLANE

Zero to ASIC Course

Play Episode Listen Later Aug 2, 2021 24:42


Tom Spyrou is a long-time EDA developer who has worked at large and small companies. In 1988 he developed QTV at VLSI Technology, the first STA engine trusted to sign off devices for fabrication without timing-based simulation. He was the original architect of the PrimeTime STA algorithm, managed Cadence's Common Timing Engine (a precursor to OpenAccess), and has held senior technical positions at Synopsys, Cadence, Simplex, AMD, Altera and Intel. Since 2019 he has been the Chief Architect and Technical Project Manager of OpenROAD. We have just heard the announcement that OpenLANE will become the de facto flow for OpenROAD, so I felt very lucky that he was willing to spend half an hour talking to us about a wide range of topics, including:
- OpenROAD history & context
- What is the aim of OpenROAD
- State of the art in industry vs research
- Training the next generation of EDA developers
- Shortage of EDA developers
- Use of other PDKs besides Sky130
- Value of open source PDKs
- Roadmap and improvements: PPA
- Machine learning additions for OpenROAD
- Cloud scale compute for OpenROAD
- OpenLANE and OpenROAD collaboration announcement
- Future hopes

Formal bytes: The Axiomise Podcast Channel
Episode 44: Formal Verification 101 - The power of formal is now in your hands

Formal bytes: The Axiomise Podcast Channel

Play Episode Listen Later Apr 13, 2021 5:34


This week we discuss our new formal verification course, launched last week on 6 April. If you're looking to understand how to apply formal methods, especially to industrial projects in VLSI, then we have something for you.

The History of Computing
Connections: ARPA > RISC > ARM > Apple's M1

The History of Computing

Play Episode Listen Later Jan 17, 2021 14:55


Let's oversimplify something in the computing world. Which is what you have to do when writing about history. You have to put your blinders on so you can get to the heart of a given topic without overcomplicating the story being told. And in the evolution of technology we can't mention all of the advances that led to each subsequent evolution. It's wonderful and frustrating all at the same time. And that value judgement of what goes in and what doesn't can be tough.  Let's start with the fact that there are two main types of processors in our devices. There's the x86 chipset developed by Intel and AMD, and then there are the RISC-based processors, which are ARM and, for the old school people, also include PowerPC and SPARC. Today we're going to set aside the x86 chipset that was dominant for so long and focus on how the RISC, and so the ARM, family emerged.    First, let's think about what the main difference is between ARM and x86. RISC, and so ARM, chips have a focus on reducing the number of instructions required to perform a task to as few as possible, and so RISC stands for Reduced Instruction Set Computing. Intel, other than the Atom series chips, has focused the x86 chips on high performance and high throughput. Big and fast, no matter how much power and cooling is necessary.  The ARM processor uses simpler instructions, which means there's less logic and so more instructions are required to perform certain logical operations. This increases memory use and can increase the amount of time to complete an execution, which ARM developers address with techniques like pipelining, or instruction-level parallelism on a processor. Seymour Cray came up with a version of this, splitting instructions into stages so different parts of the processor work on different instructions at once, and Star, Amdahl and then ARM implemented it as well.  The x86 chips are Complex Instruction Set Computing chips, or CISC. Those will do larger, more complicated tasks, like floating point operations or memory searches, on the chip. 
That often requires more consistent and larger amounts of power. ARM chips are built for low power. The reduced complexity of operations is one reason, but it's also in the design philosophy. This means fewer heat sinks and often accounting for less consistent streams of power. This 130-watt x86 vs 5-watt ARM can mean slightly lower clock speeds, but the chips can cost more, as people will spend less on heat sinks and power supplies. This also makes ARM excellent for mobile devices.  The inexpensive MOS 6502 chips helped revolutionize the personal computing industry in 1975, finding their way into the Apple II and a number of early computers. They were RISC-like but CISC-like as well. They took some of the instruction set architecture family from the IBM System/360 through to the PDP, Data General Nova, Intel 8080, and Zilog, and so after the emergence of Windows, Intel finally captured the personal computing market and the x86 flourished.  But the RISC architecture actually goes back to the ACE, developed in 1946 by Alan Turing. It wasn't until the 1970s that Carver Mead from Caltech and Lynn Conway from Xerox PARC saw that the number of transistors was going to plateau on chips while workloads on chips were growing exponentially. ARPA and other agencies needed more and more instructions, so they instigated what we now refer to as the VLSI project, a DARPA program initiated by Bob Kahn to push into the 32-bit world. They would provide funding to different universities, including Stanford and the University of North Carolina.  Out of those projects, we saw the Geometry Engine, which led to a number of computer aided design, or CAD, efforts to aid in chip design. Those workstations, when linked together, evolved into tools used on the Stanford University Network, or SUN, which would effectively spin out of Stanford as Sun Microsystems. 
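The pipelining idea mentioned above is easy to see with a back-of-the-envelope cycle count. Here is my own illustrative sketch (the stage and instruction counts are assumptions, not figures from the episode), assuming a classic five-stage pipeline with no stalls:

```python
# Illustrative sketch: how pipelining recovers throughput even when a
# RISC design needs more instructions per task.

def cycles_unpipelined(n_instructions: int, n_stages: int = 5) -> int:
    # Without pipelining, each instruction occupies the whole datapath
    # for all of its stages before the next instruction can start.
    return n_instructions * n_stages

def cycles_pipelined(n_instructions: int, n_stages: int = 5) -> int:
    # With pipelining, a new instruction enters every cycle once the
    # pipeline is full: fill time (n_stages) plus one cycle for each
    # remaining instruction.
    return n_stages + (n_instructions - 1)

if __name__ == "__main__":
    n = 100
    print(cycles_unpipelined(n))  # 500
    print(cycles_pipelined(n))    # 104
```

Once the pipeline fills, roughly one instruction completes per cycle, which is why executing more, simpler instructions need not mean proportionally more time.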
And across the bay at Berkeley we got a standardized Unix implementation that could use the tools being developed in the Berkeley Software Distribution, or BSD, which would eventually become the operating system used by Sun, SGI, and now OpenBSD and other variants.  And the efforts from the VLSI project led to Berkeley RISC in 1980 and Stanford MIPS, as well as the multi-chip wafer. The leader of that Berkeley RISC project was David Patterson, who still serves as vice chair of the RISC-V Foundation. The chips would add more and more registers but with fewer specializations. This led to the need for more memory. But UC Berkeley students shipped a faster chip than was otherwise on the market in 1981. And the RISC II was usually double or triple the speed of the Motorola 68000.  That led to the Sun SPARC and DEC Alpha. There was another company paying attention to what was happening in the RISC project: Acorn Computers. They had been looking into using the 6502 processor until they came across the scholarly works coming out of Berkeley about their RISC project. Sophie Wilson and Steve Furber from Acorn then got to work building an instruction set for the Acorn RISC Machine, or ARM for short. They had the first ARM working by 1985, which they used to build the Acorn Archimedes. The ARM2 would be faster than the Intel 80286 and by 1990, Apple was looking for a chip for the Apple Newton. A new company called Advanced RISC Machines, or ARM, would be founded, and from there they grew, with Apple being a shareholder through the 90s. By 1992, they were up to the ARM6 and the ARM610 was used for the Newton. DEC licensed the ARM architecture to develop the StrongARM, selling chips to other companies. Acorn would be broken up in 1998 and parts sold off, but ARM would live on until acquired by Softbank for $32 billion in 2016. Softbank is currently in acquisition talks to sell ARM to Nvidia for $40 billion.  
Meanwhile, John Cocke at IBM had been working on RISC concepts since 1975 for embedded systems and by 1982 moved on to start developing their own 32-bit RISC chips. This led to the POWER instruction set, which they shipped in 1990 as the RISC System/6000, or as we called them at the time, the RS/6000. They scaled that down to the PowerPC and in 1991 forged an alliance with Motorola and Apple. DEC designed the Alpha. It seemed as though the computer industry was Microsoft and Intel vs the rest of the world, using a RISC architecture. But by 2004 the alliance between Apple, Motorola, and IBM began to unravel and by 2006 Apple moved the Mac to an Intel processor. But something was changing in computing. Apple shipped the iPod back in 2001, effectively ushering in the era of mobile devices. By 2007, Apple released the first iPhone, which shipped with a Samsung ARM.  You see, the interesting thing about ARM is they don't fab chips, like Intel - they license technology and designs. Apple licensed the Cortex-A8 from ARM for the iPhone 3GS by 2009 but had an ambitious lineup of tablets and phones in the pipeline. And so in 2010 Apple did something new: they made their own system on a chip, or SoC. Continuing to license some ARM technology, Apple pushed on, getting between 800 MHz and 1 GHz out of the chip and using it to power the iPhone 4, the first iPad, and the long overdue second-generation Apple TV. The next year came the A5, used in the iPad 2 and first iPad Mini, then the A6 at 1.3 GHz for the iPhone 5, and the A7 for the iPhone 5s and iPad Air. That was the first 64-bit consumer SoC. In 2014, Apple released the A8 processor for the iPhone 6, which came in speeds ranging from 1.1 GHz to the 1.5 GHz chip in the 4th generation Apple TV. By 2015, Apple was up to the A9, which clocked in at 1.85 GHz for the iPhone 6s. 
Then we got the A10 in 2016, the A11 in 2017, the A12 in 2018, A13 in 2019, A14 in 2020 with neural engines, 4 GPUs, and 11.8 billion transistors compared to the 30,000 in the original ARM.  And it's not just Apple. Samsung has been on a similar tear, firing up the Exynos line in 2011 and continuing to license the ARM up to Cortex-A55 with similar features to the Apple chips, namely used on the Samsung Galaxy A21. And the Snapdragon. And the Broadcoms.  In fact, the Broadcom SoC was used in the Raspberry Pi (developed in association with Broadcom) in 2012. The 5 models of the Pi helped bring on a mobile and IoT revolution.  And so nearly every mobile device now ships with an ARM chip as do many a device we place around our homes so our digital assistants can help run our lives. Over 100 billion ARM processors have been produced, well over 10 for every human on the planet. And the number is about to grow even more rapidly. Apple surprised many by announcing they were leaving Intel to design their own chips for the Mac.  Given that the PowerPC chips were RISC, the ARM chips in the mobile devices are RISC, and the history Apple has with the platform, it's no surprise that Apple is going back that direction with the M1, Apple's first system on a chip for a Mac. And the new MacBook Pro screams. Even software running in Rosetta 2 on my M1 MacBook is faster than on my Intel MacBook. And at 16 billion transistors, with an 8 core GPU and a 16 core neural engine, I'm sure developers are hard at work developing the M3 on these new devices (since you know, I assume the M2 is done by now). What's crazy is, I haven't felt like Intel had a competitor other than AMD in the CPU space since Apple switched from the PowerPC. Actually, those weren't great days. I haven't felt that way since I realized no one but me had a DEC Alpha or when I took the SPARC off my desk so I could play Civilization finally.  
And this revolution has been a constant stream of evolutions, 40 years in the making. It started with an ARPA grant, but various evolutions from there died out. And so really, it all started with Sophie Wilson. She helped give us the BBC Micro and the ARM. She was part of the move to Element 14 from Acorn Computers and then ended up at Broadcom when they bought the company in 2000 and continues to act as the Director of IC Design. We can definitely thank ARPA for sprinkling funds around prominent universities to get us past 10,000 transistors on a chip. Given that chips continue to proceed at such a lightning pace, I can't imagine where we'll be at in another 40 years. But we owe her (and her coworkers at Acorn and the team at VLSI, now NXP Semiconductors) for their hard work and innovations.

The History of Computing
Bob Taylor: ARPA to PARC to DEC

The History of Computing

Play Episode Listen Later Jan 15, 2021 14:31


Robert Taylor was one of the true pioneers in computer science. In many ways, he is the string (or glue) that connected the US government's era of supporting computer science through ARPA to innovations that came out of Xerox PARC and then to the work done at Digital Equipment Corporation's Systems Research Center. Those are three critical aspects of the history of computing, and while Taylor didn't write any of the innovative code or develop any of the tools that came out of those three research environments, he saw people and projects worth funding and made sure the brilliant scientists got what they needed to get things done. The 31 years in computing that his stops represented were some of the most formative years for the young computing industry, and his ability to inspire advances began with Vannevar Bush's 1945 article called "As We May Think" and ended with the explosion of the Internet across personal computers.  Bob Taylor inherited a world where computing was waking up to large, crusty but finally fully digitized mainframes stuck to its eyes in the morning, and went to bed the year Corel bought WordPerfect because PCs needed applications, the year the Pentium 200 MHz was released, the year Palm Pilot and eBay were founded, the year AOL started to show articles from the New York Times, the year IBM opened a web shopping mall and the year the Internet reached 36 million people. Excite and Yahoo went public. Sometimes big, sometimes small, all of these can be traced back to Bob Taylor - kinda' how we can trace all actors to Kevin Bacon. But more like if Kevin Bacon found talent and helped them get started, by paying them during the early years of their careers…  How did Taylor end up as the glue for the young and budding computing research industry? Going from tween to teenager during World War II, he went to Southern Methodist University in 1948, when he was 16. 
He jumped into the US Naval Reserves during the Korean War and then got his masters in psychology at the University of Texas at Austin using the GI Bill. Many of those pioneers in computing in the 60s went to school on the GI Bill. It was a big deal across every aspect of American life at the time - paving the way to home ownership, college educations, and new careers in the trades. From there, he bounced around, taking classes in whatever interested him, before taking a job at Martin Marietta, helping design the MGM-31 Pershing, and ending up at NASA, where he discovered the emerging computer industry.  Taylor was working on projects for the Apollo program when he met JCR Licklider, known as the Johnny Appleseed of computing. Lick, as his friends called him, had written an article called Man-Computer Symbiosis in 1960 and had laid out a plan for computing that influenced many. One such person was Taylor. And so it was that Lick began at ARPA in 1962, and in 1965 he succeeded in recruiting Taylor away from NASA to take his place running ARPA's Information Processing Techniques Office, or IPTO.  Taylor had funded Douglas Engelbart's research on computer interactivity at Stanford Research Institute while at NASA. He continued to do so when he got to ARPA and that project resulted in the invention of the computer mouse and the Mother of All Demos, one of the most inspirational moments and a turning point in the history of computing.  They also funded a project to develop an operating system called Multics. This would be a two million dollar project run by General Electric, MIT, and Bell Labs. Run through Project MAC at MIT, there were just too many cooks in the kitchen. Later, some of those Bell Labs cats would just do their own thing. Ken Thompson had worked on Multics and took the best and worst into account when he wrote the first lines of Unix and the B programming language, which led to one of the most important languages of all time, C.  
Interactive graphical computing and operating systems were great, but IPTO, and so Bob Taylor and team, would fund, straight out of the Pentagon, the ability for one computer to process information on another computer. Which is to say they wanted to network computers. It took a few years, but eventually they brought in Larry Roberts, and by late 1968 they'd awarded the contract from an RFQ to build a network to a company called Bolt Beranek and Newman (BBN), who would build Interface Message Processors, or IMPs. The IMPs would connect a number of sites and route traffic, and the first one went online at UCLA in 1969, with additional sites coming on frequently over the next few years. That system would become ARPANET, the commonly accepted precursor to the Internet.  There was another networking project going on at the time that was also getting funding from ARPA as well as the Air Force: PLATO, out of the University of Illinois. PLATO was meant for teaching and had begun in 1960, but by then they were on version IV, running on a CDC Cyber, and the time sharing system hosted a number of courses, as they referred to programs. These included actual courseware, games, content with audio and video, message boards, instant messaging, custom touch screen plasma displays, and the ability to dial into the system over phone lines, making the system another early network.  Then things get weird. Taylor is sent to Vietnam as a civilian, although his rank equivalent would be a brigadier general. He helped develop the Military Assistance Command in Vietnam. Battlefield operations and reporting were entering the computing era. Only problem is, while Taylor was a war veteran and had been deep in the defense research industry for his entire career, Vietnam was an incredibly unpopular war, and seeing it first hand and getting pulled into the theater of war had him ready to leave. 
This, combined with interpersonal problems with Larry Roberts, who was running the ARPA project by then and chafed at Taylor having been his boss without a PhD or direct research experience, led Taylor to join a project ARPA had funded at the University of Utah and leave ARPA.  There, he worked with Ivan Sutherland, who wrote Sketchpad and is known as the Father of Computer Graphics, until he got another offer. This time, from Xerox, to go to their new Palo Alto Research Center, or PARC. One rising star in the computer research world was pretty against the idea of a centralized, mainframe-driven time sharing system. This was Alan Kay. In many ways, Kay was like Lick. And unlike the time sharing projects of the day, the Licklider and Kay inspiration was for dedicated cycles on processors. This meant personal computers.  The Mansfield Amendment in 1973 banned general research by defense agencies. This meant that ARPA funding started to dry up and the scientists working on those projects needed a new place to fund their playtime. Taylor was able to pick the best of the scientists he'd helped fund at ARPA. He helped bring in people from Stanford Research Institute, where they had been working on the oN-Line System, or NLS.  This new Computer Science Laboratory landed people like Charles Thacker, David Boggs, Butler Lampson, and Bob Sproull and would develop the Xerox Alto, the inspiration for the Macintosh. The Alto contributed the very ideas of overlapping windows, icons, menus, cut and paste, and word processing. In fact, Charles Simonyi from PARC would work on Bravo before moving to Microsoft to spearhead Microsoft Word. Bob Metcalfe on that team was instrumental in developing Ethernet so workstations could communicate with ARPANET all over the growing campus-connected environments. Metcalfe would leave to form 3COM.  SuperPaint would be developed there, and Alvy Ray Smith would go on to co-found Pixar, continuing the work begun by Richard Shoup.  
They developed the laser printer, some of the ideas that ended up in TCP/IP, and their research into page layout languages would end with Chuck Geschke, John Warnock and others founding Adobe.  Kay would bring us the philosophy behind the DynaBook, which decades later would effectively become the iPad. He would also develop Smalltalk with Dan Ingalls and Adele Goldberg, ushering in the era of object oriented programming.  They would do pioneering work on VLSI semiconductors, ubiquitous computing, and anything else to prepare the world to mass produce the technologies that ARPA had been spearheading for all those years. Xerox famously did not mass produce those technologies. And nor could they have cornered the market on all of them. The coming waves were far too big for one company alone.  And so it was that PARC, unable to bring the future to the masses fast enough to impact earnings per share, got a new director in 1983, and William Spencer was yet another of the three bosses that Taylor clashed with. Some resented that he didn't have a PhD in a world where everyone else did. Others resented the close relationship he maintained with the teams. Either way, Taylor left PARC in 1983 and many of the scientists left with him.  It's both a curse and a blessing to learn more and more about our heroes. Taylor was one of the finest minds in the history of computing. His tenure at PARC certainly saw a lot of innovation and one of the most innovative teams to have ever been assembled. But as many of us who have been put into a position of leadership know, it's easy to get caught up in the politics. I am ashamed every time I look back and see examples of building political capital at the expense of a project or letting an interpersonal problem get in the way of the greater good for a team. But also, we're all human, and the people that I've interviewed seem to match the accounts I've read in other books.  
And so Taylor's final stop was Digital Equipment Corporation, where he was hired to form their Systems Research Center in Palo Alto. They brought us the AltaVista search engine, the Firefly computer, Modula-3 and a few other advances. Taylor retired in 1996; DEC was acquired by Compaq in 1998, and when Compaq was acquired by HP, the SRC was merged with other labs at HP.  From ARPA to Xerox to Digital, Bob Taylor certainly left his mark on computing. He had a knack for seeing the forest for the trees and inspired engineering feats the world is still wrestling with how to bring to fruition. Raw, pure science. He died in 2017. He worked with some of the most brilliant people in the world at ARPA. He inspired passion, and sometimes drama, in what Stanford's Donald Knuth called "the greatest by far team of computer scientists assembled in one organization."  In his final email to his friends and former coworkers, he said "You did what they said could not be done, you created things that they could not see or imagine." The Internet, the personal computer, the tech that would go on to become Microsoft Office, object oriented programming, laser printers, tablets, ubiquitous computing devices. So, he isn't exactly understating what they accomplished out of a false sense of humility. I guess you can't do that often if you're going to inspire the way he did.  So feel free to abandon the pretense as well, and go inspire some innovation. Heck, who knows where the next wave will come from. But if we aren't working on it, it certainly won't come. Thank you so much and have a lovely, lovely day. We are so lucky to have you join us on yet another episode. 

Chad Cargill's ACT Test Prep
Episode 39: Interested in an Engineering Degree? Types of Engineering Explained

Chad Cargill's ACT Test Prep

Play Episode Listen Later Nov 5, 2020 23:08


Many students say they want to be an engineer, but what kind of engineer is the question. In this episode, I explain the main types of engineering degrees and the general purpose of each type.

Types of Engineering Degrees Offered at Oklahoma State

Aerospace: What is aerospace engineering? Aerospace engineering is the study of the science and technology of flight, and the design of air, land and sea vehicles for transportation and exploration.

Biosystems: What is biosystems engineering? The study of biosystems engineering merges engineering and agricultural science to improve our quality of life while maintaining the environment and preserving our natural resources.

Chemical: What is chemical engineering? Chemical engineering is a discipline focused on conceiving and designing processes to produce, transform and transport materials, beginning with experimentation in the laboratory followed by implementation of the technology.

Civil: What is civil engineering? Civil engineering is one of the oldest engineering disciplines, with a focus on the built environment that encompasses much of what defines modern civilization: buildings, bridges, roads, etc.

Computer: What is computer engineering? Computer engineering encompasses a broad range of technologies that utilize digital devices for the benefit of society. Subdisciplines include digital electronics, VLSI chips, embedded controllers, networking, software development, memory and storage devices, cloud computing, internet-of-things, computer security, application-specific ICs, graphics processing units, and computer architecture.

Electrical: What is electrical engineering? Electrical engineering encompasses a broad range of technologies that utilize electricity for the benefit of society. 
Subdisciplines include energy systems, machines, power electronics, analog electronics, instrumentation, sensors, signal processing, machine vision, communications, robotics, wireless devices, radar, photonics, biomedical devices, and artificial intelligence.

Industrial: What is industrial engineering? Industrial Engineering and Management (IEM) is an engineering discipline that focuses on designing, operating, managing, and continuously improving manufacturing and service systems so that they are effective and efficient.

Mechanical: What is mechanical engineering? Mechanical engineering is focused on a learning and research environment to instruct and encourage our students to reach their full potential in technical expertise, innovative expression and collaborative design.

US News and World Report

Mechanical Engineer (#32 in 100 Best Jobs): Someone with a mechanical engineering degree has many job options for his or her career path. The skills of a mechanical engineer are needed in many industries and on many types of projects, from vehicle manufacturing to nanotechnology. Mechanical engineers are involved in the production of mechanical instruments and tools from start to finish, and their work includes aspects of design, development and testing. Projected jobs: 12,800. Median salary: $87,370. Education needed: Bachelor's.

Civil Engineer (#33 in 100 Best Jobs): From the street in front of your home to the Golden Gate Bridge, civil engineers are responsible for the design and maintenance of public works and facilities. Civil engineers are involved from start to finish in the process of constructing buildings, bridges and roads. Projected jobs: 20,500. Median salary: $86,640. Education needed: Bachelor's.

Estornuda.me | Ciencia y COVID-19
Contact Tracing App para México

Estornuda.me | Ciencia y COVID-19

Play Episode Listen Later Jul 12, 2020 40:07


In this podcast you'll learn about the aspects intrinsic to this project, such as cryptography and information security, an especially important topic right now since it concerns protecting citizens' private data. The 2019 National Survey on the Availability and Use of Information Technologies in Households estimates that 86.5 million people in Mexico use a cell phone. With these numbers in mind, a research group in the Computer Science Department at Cinvestav, made up of Francisco Rodríguez Henríquez, Brisbane Ovilla Martínez and Cuauhtémoc Mancillas López, is working on the design of a free smartphone application (app) to notify users of a possible risk of COVID-19 infection. The idea is for devices to exchange anonymous identifiers via Bluetooth, preserving people's privacy, so that users can know at any time whether, while moving around the city on foot or on the various forms of public transportation, they have been in contact with someone infected with COVID-19 and could now be carrying the virus. To warn of a possible risk of infection, the application considers several factors, such as the number of contacts, the interaction time, and the distance maintained during an encounter with a person carrying COVID-19. These variables are used to calculate the probability of infection. Francisco Rodríguez received his doctorate in June 2000 from the Department of Electrical and Computer Engineering at the University of Oregon, USA. From July 2000 to May 2002, Dr. Rodríguez worked at information security companies in the United States and Germany. Since May 2002 he has been with the Computer Science Department at CINVESTAV-IPN in Mexico City, where he is currently a full professor and head of the department. 
He is the co-author of more than 80 technical articles and book chapters, and the first author of the book "Cryptographic Algorithms on Reconfigurable Hardware", published by Springer. Since November 2006, Dr. Rodríguez has been a member of the Mexican Academy of Sciences, and he has served on the editorial boards of the Journal of Universal Computer Science (University of Graz, Austria), the Journal of Cryptographic Engineering (Springer), "the VLSI journal" (Elsevier), and IEEE Transactions on Computers. His main research areas are cryptography, computer arithmetic, and information security. "The application will ask for access to Bluetooth (...), since Bluetooth will be constantly exchanging data, bilaterally and anonymously, with other mobile devices within a 10-meter radius…"
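The description above says the app weighs the number of contacts, interaction time, and distance to estimate the probability of infection. The following is a hypothetical sketch of that kind of scoring; the names, weights, and threshold here are my own illustrative assumptions, not the Cinvestav team's actual algorithm:

```python
# Hypothetical exposure-risk scoring: longer and closer encounters
# contribute more, and encounters beyond the ~10 m Bluetooth radius
# contribute nothing. Illustrative only.

from dataclasses import dataclass

@dataclass
class Encounter:
    duration_min: float  # time spent near the other device, in minutes
    distance_m: float    # estimated distance, e.g. from signal strength

def exposure_score(encounters: list) -> float:
    """Sum per-encounter risk contributions."""
    score = 0.0
    for e in encounters:
        if e.distance_m > 10.0:  # beyond Bluetooth range, ignore
            continue
        # Duration increases risk; distance decreases it. The 0.5 m
        # floor avoids dividing by a near-zero distance estimate.
        score += e.duration_min / max(e.distance_m, 0.5)
    return score

def at_risk(encounters: list, threshold: float = 15.0) -> bool:
    # The threshold is an arbitrary placeholder for illustration.
    return exposure_score(encounters) >= threshold

if __name__ == "__main__":
    day = [Encounter(30, 1.0), Encounter(5, 2.0)]
    print(exposure_score(day))  # 32.5
    print(at_risk(day))         # True
```

A real deployment would calibrate the weights epidemiologically and compute scores on-device, so only anonymous identifiers ever leave the phone.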

RE: Engineering Radio
Disappearing Pioneers

RE: Engineering Radio

Play Episode Listen Later May 6, 2020 27:20


When women and underrepresented minorities make important contributions to science or technology, why do they later disappear from history? It's a phenomenon that Lynn Conway, University of Michigan professor emerita of electrical engineering and computer science, has documented since her own erasure. Conway was a driving force in the very-large-scale integration, or VLSI, revolution, which triggered the expansion and impact of Silicon Valley, and is credited with making modern digital systems such as cell phones and laptops possible. In this episode of RE: Engineering Radio, she reconstructs how her own contributions faded over time and why "others" lose credit.
Enjoying RE: Engineering Radio so far? Rate, review and subscribe to receive notifications when new episodes go live!
Apple Podcasts: https://umicheng.in/EngineeringPod
Spotify: https://umicheng.in/EngineeringPodSpotify
Google Play: https://umicheng.in/EngineeringPodGoogle
See acast.com/privacy for privacy and opt-out information.

THE VALLEY CURRENT®️ COMPUTERLAW GROUP LLP
The Valley Current ®: The Art of Entrepreneurship in the Field of Electronic Design Automation - Part 2

THE VALLEY CURRENT®️ COMPUTERLAW GROUP LLP

Play Episode Listen Later May 1, 2020 19:37


Many people are unaware that there is a whole field of technology that supplies tools, data, and products to the microchip industry. Anyone designing a new piece of technology relies on a whole ecosystem of suppliers to provide the electronic deliverables (chip designs, data formats, and knowledge products) that enable them to quickly build the electronics for that new piece of tech. Billie Riviera joins The Valley Current ® today to share her knowledge about the field of electronic design automation, the tech that builds tech!

THE VALLEY CURRENT®️ COMPUTERLAW GROUP LLP
The Valley Current ®: The Art of Entrepreneurship in the Field of Electronic Design Automation - Part 1

THE VALLEY CURRENT®️ COMPUTERLAW GROUP LLP

Play Episode Listen Later Apr 27, 2020 20:49


Many people are unaware that there is a whole field of technology that supplies tools, data, and products to the microchip industry. Anyone designing a new piece of technology relies on a whole ecosystem of suppliers to provide the electronic deliverables (chip designs, data formats, and knowledge products) that enable them to quickly build the electronics for that new piece of tech. Billie Riviera joins The Valley Current ® today to share her knowledge about the field of electronic design automation, the tech that builds tech! 

The History of Computing
Boolean Algebra

The History of Computing

Play Episode Listen Later Feb 8, 2020 9:24


Boolean algebra Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we're going to talk a little about math. Or logic. Computers are just a bunch of zeroes and ones, right? Binary. They make shirts about it. You know, there are 10 types of people in the world. But where did that come from? After centuries of trying to build computing devices that could help with math using gears that had lots of slots in them, armed with tubes and then transistors, we had to come up with a simpler form of logic. And why write your own complicated math when you can borrow it and have instant converts to your cause? Technical innovations are often comprised of a lot of building blocks from different fields of scientific or scholastic studies. The 0s and 1s, which make up the flip-flop circuits computers are so famous for, are made possible by the concept that all logic can be broken down into either true or false. And so the mathematical logic that we have built trillions of dollars in industry off of began in 1847 in a book called The Mathematical Analysis of Logic, by George Boole. He would follow that up in a book called An Investigation of the Laws of Thought in 1854. He was the father of what we would later call Boolean algebra, once the science of an entire mathematical language built on true and false matured enough that Charles Sanders Peirce wrote The Simplest Mathematics, which included a chapter titled Boolian Algebra with One Constant. By 1913, there were many more works with the name and it became Boolean algebra. This was right around the time that the electronic research community had first started experimenting with using vacuum tubes as flip-flop switches. So there's elementary algebra where you can have any old number with any old logical operation. 
Those operators can be addition, subtraction, multiplication, division, etc. But in Boolean algebra the only variables available are a 0 or a 1. Later we would get abstract algebra as well, but for computing it was way simpler to just stick with those 0s and 1s, and in fact, ditching the gears from the old electromechanical computing paved the way for tubes to act as flip-flop switches, and transistors to replace those. And the evolutions came. Both to the efficiency of flip-flop switches and to the increasingly complex uses for mechanical computing devices. But they hadn't all been mashed up together. So set theory and statistics were evolving. And Huntington, Jevons, and Schröder basically perfected Boolean logic, paving the way for M. H. Stone to prove, by 1936, that Boolean algebra is isomorphic to a field of sets. And so it should come as no surprise that Boolean algebra would be key to the development of basic mathematical functions used on the Atanasoff-Berry computer. Remember that back then, all computing was basically used for math. Claude Shannon would help apply Boolean algebra to switching circuits. This involved binary decision diagrams for synthesizing and verifying the design of logic circuits. And so we could analyze and design circuits using algebra to define logic gates. Those gates would get smaller and faster and were combined using combinational logic until we got LSI circuits and later, with the automation of the design of chips, VLSI. So to put it super-simple, let's say you are trying to do some maths. First up, you convert values to bits, which are binary digits. Those binary digits are represented as a 0 or a 1. There's a substantial amount of information you can pack into those bits, with all major characters easily allowed for in a byte, which is 8 of those bits. So let's say you also map your algebraic operators using those 0s and 1s, another byte. Now you can add the number in the first byte. 
To do so though, you would need to basically translate the notations from classical propositional calculus to their expression in Boolean algebra, typically done in an assembler. Much, much more logic is required to apply quantifiers. The simple truth values are 0 and 1, and small truth tables define AND (also known as a conjunction), OR (also known as a disjunction), NOT (negation), and XOR (also known as an exclusive-or). This allows for an exponential increase in the amount of logic you can apply to a problem. The act of deciding whether a problem, translated into Boolean terms, can be satisfied is known as the Boolean satisfiability problem, or SAT. At this point though, all problems really seem solvable using some model of computation given the amount of complex circuitry we now have. So the computer interprets the inputs, applies the functions, and sets the state of a switch based on the input. The computer then combines all those trues and falses into the necessary logic and outputs an answer. Because entering raw 0s and 1s took too much effort, the input got moved to punch cards, and modern programming was born. These days we can also add Boolean logic into higher functions, such as running AND in Google searches. So ultimately the point of this episode is to explore what exactly all those 0s and 1s are. They're complex thoughts and formulas expressed as true and false using complicated Boolean algebra to construct them. Now, there's a chance that some day we'll find something beyond a transistor. And then we can bring a much more complicated expression of thought broken down into different forms of algebra. But there's also the chance that Boolean algebra sitting on transistors, or other things that are the next evolution of Boolean gates or transistors, is really, well, kinda' it. So from the Atanasoff-Berry computer comes Colossus and then ENIAC in 1945. 
It wasn't obvious yet, but nearly 100 years after the development of Boolean algebra, it had been combined with several other technologies to usher in the computing revolution, setting up the evolution to microprocessors and the modern computer. These days, few programmers are constrained by programming in Boolean logic. Instead, we have many more options. Although I happen to believe that understanding this fundamental building block was one of the most important aspects of studying computer science and provided an important foundation to computing in general. So thank you for listening to this episode. I'm sure algebra got ya' totally interested and that you're super-into math. But thanks for listening anyways. I'm pretty lucky to have ya'. Have a great day
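The gates described in the episode can be sketched in a few lines of Python. This is an illustrative model only; the function names and the half-adder example are my own, not anything from the show:

```python
# A minimal sketch of Boolean algebra as used in logic circuits:
# model the basic gates on 0/1 values, then derive XOR and a half
# adder from them, the way gates combine into combinational logic.

def AND(a, b):  # conjunction
    return a & b

def OR(a, b):   # disjunction
    return a | b

def NOT(a):     # negation of a single bit
    return 1 - a

def XOR(a, b):  # exclusive-or, built from the three primitives
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    """Combine gates to produce the sum bit and carry bit of two bits."""
    return XOR(a, b), AND(a, b)

if __name__ == "__main__":
    # Print the full truth table for the half adder.
    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} -> sum={s} carry={c}")
```

Chaining half adders into full adders is exactly how the "add the number in the first byte" step happens in hardware.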

#SuccessInSight
Tom Dillinger, Author of VLSI Design Methodology Development

#SuccessInSight

Play Episode Listen Later Dec 26, 2019 37:00


Tom Dillinger is an accomplished electrical engineer and the author of VLSI Design Methodology Development. An authority in microprocessor design, Tom carefully introduces core concepts, and then guides engineers through modeling, functional design validation, design implementation, electrical analysis, and release to manufacturing. This guide is for all VLSI system designers, senior undergraduate or graduate students of microelectronics design, and companies offering internal courses for engineers at all levels. It is applicable to engineering teams undertaking new projects and migrating existing designs to new technologies. Click here to find VLSI Design Methodology Development on Amazon. Click here to learn more about Tom and his book on his website. Click here to learn more about Tom and connect with him on LinkedIn. The SuccessInSight Podcast is a production of Fox Coaching, Inc. and First Story Strategies. Link to SuccessInSight Podcast: https://www.successinsightpodcast.com/2019/12/tom-dillinger.html

The History of Computing
The Tetris Negotiations

The History of Computing

Play Episode Listen Later Oct 14, 2019 12:58


The Tetris Negotiations Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today's episode is on the origins of Tetris. I'll never forget the first time I saw St. Basil's as I was loading Tetris up. I'll never get those hours back that I played it. Countless hours. So how did it all begin? OK, so check this out. It's 1984. Los Angeles hosts the Olympics and the Russians refuse to come. But then, the US had refused to come to Moscow when they had the Olympics. I am fairly certain that someone stuck their tongue out at someone else. It happens a lot in preschool. One may have even given the middle finger. That's what middle school is named after, right? It was a recession. Microchips were getting cheap. And Digital Equipment Corporation's PDP was one of the best computers ever made. It wasn't exactly easy to come by in Russia though. The microcomputer was becoming a thing. And Alexey Pajitnov was working at the Dorodnitsyn Computing Centre of the Soviet Academy of Sciences. As with a lot of R&D places, they were experimenting with new computers. The Electronika 60 was so similar to the PDP that DEC chip designers printed jokes on chips just for Russians looking to steal their chip designs. They actually managed to get a quarter million ops per second out of 5 VLSI chips with 8k of RAM. They needed to do some random number generation and visualization. Good ole' Alexey was told to write some software to test the Electronika. He thought, ya' know, I should do a game. The beauty of writing games is that they can be math intensive, so perfect for benchmarking. But what kind of game? When he was a kid, he'd loved to play pentomino games. That's a game where there are 5 squares connected in one of 12 ways. Reduce that to 4 and it's 7. The thing is, when you have 5 the math was a little too intense. And it was a little easier with 4 blocks. 
He drew them on the screen in ASCII and had the tetraminos fall down the screen. The games ended pretty quick, so he added an additional feature that deleted rows once they were complete. Then, he sped the falling speed up as you cleared a level. You had to spin your puzzle pieces faster, the further you got. And once you're addicted, you turn and turn and turn and turn. No frills, just fun. It needed a name though. Since you're spinning 4 blocks, and tetra is Greek for four, it seemed like it got mashed up with tennis for tetraminiss. No, tetraminoss. Wait, cut a syllable here and there and you get to Tetris. They have 7 shapes: I, J, L, O, S, T, Z. The IBM PC version ran with the I as maroon, the J as silver, the L as purple, the O as navy, the S as green, the T as brown, the Z as teal. He got a little help from Dmitry Pavlovsky and 16-year-old programming whiz Vadim Gerasimov. Probably because they were already hopelessly addicted to the game. They ported it to the fancy schmancy IBM PC in about two months, and it started to spread around Moscow. By now, his coworkers are freakin' hooked. This was the era of disk sharing. And disks were certainly being shared. But it didn't stop there. It leaked out all over the place, making its way to Budapest, where it ended up on a machine at British-based game maker Andromeda. CEO Robert Stein sends a Telex to Dmitry Pavlovsky. He offers 75% royalties on sales and $10,000. Pretty standard stuff so far, but this is where it gets fun. So Pavlovsky responds that they should maybe negotiate a contract. But Andromeda had already sold the rights to Spectrum HoloByte and so attempted to license the software from the Hungarian developers that did the porting. Then realized that was dumb and went back to the negotiating table, getting it done for "computers." All license deals went through the USSR at the time, and the Russian government was happy to take over the contract negotiations. 
So the USSR's Ministry of Software and Hardware Export gets involved. Through a company they set up for this called Elektronorgtechnica, or ELORG, they negotiated the contract. That's how, by 87, Tetris spreads to the US. In fact, Tetris was the first game released from the USSR to the USA, and it was for the Commodore 64 and IBM PC. It was so simple it was sold as a budget title. The Apple II package came with three versions on three disks, 5.25-inch, not copy protected yet. Can you say honor system. In 1988, Henk Rogers discovers Tetris at a trade show in Vegas and gets all kinds of hooked. Game consoles had been around for a long time, and anyone who paid attention knew that a number of organizations around the world were looking to release handhelds. Now, old Henk was the Dutch video game designer behind a role playing game called The Black Onyx and had been looking for his next thing, and customers. When he saw Tetris, he knew it was something special. He also knew the Game Boy was coming and thought maybe this was the killer game for the platform. He did his research and contacted Stein from Andromeda to get the rights to make a version for mobiles. Stein was into it but wasn't on the best of terms with the Russian government because he was a little late in his royalty payments. Months went by. Henk didn't hear back. Spectrum HoloByte got wind as well and sent Kevin Maxwell to Moscow to get the rights. Realizing his cash cow was getting in danger, old Stein from Andromeda also decided to hop on a plane and go to Moscow. They each met with the Russians separately in about a three-day span. Henk Rogers is a good dude. As a developer who'd been dealing with rights to his own game, he decided the best way to handle the Russians was to actually just lay out how it all worked. He gave them a crash course in the evolving world of computer vs mobile license agreements in an international world. The Russians showed him their contracts with Andromeda. 
He told them how it should all really be. They realized Andromeda wasn't giving them the best of deals. Henk also showed them a game that there's no rights deal for. Whether all this was intentionally screwing the other parties or not is one for the historians, but by the time he walked out he'd make a buck per copy that went on the Game Boy. There was other wrangling with the other two parties, including an incident where the Russians sent a fax they knew Maxwell couldn't get in order to get out of a clause in a contract. This all set up a few lawsuits and resulted in certain versions in certain countries shipping then being pulled back off the shelf. Fun times. But because of it all, in 1989 the Game Boy was released. Henk was right, Tetris turned out to be the killer app for the platform. Until Minecraft came along it was the most popular game of all time, selling over 30 million copies. And ranked #5 in the 100 best Nintendo games. It was the first Game Boy game that came with the ability to link up to other Game Boys, and you could play against your friends. Back then you needed to use a cable to co-op. The field was 10 wide and 18 high on the Game Boy, and it was set to music from Nintendo composer Hirokazu Tanaka. The Berlin Wall is torn down in 1989. I suspect that was part of the negotiations with Game Boy. Can you imagine Gorbachev and Reagan with their Game Boys linked up playing Tetris for hours over the fate of Germany? 'Cause I can. You probably think there were much more complicated negotiations taking place. I do not. I tend to think Reagan's excellent Tetris skills ended the Cold War. So Pajitnov's friend Vladimir Pokhilko had done some work on the game as well, and in 1989 he ran psychological experiments using the game. With that research, the two would found a company called AnimaTek. They would focus on 3D software, releasing El-fish through Maxis. 
While Tetris had become the most successful game of all time, Pokhilko was in a dire financial crisis and would commit suicide. There's more to that story but it's pretty yuck so we'll leave it at that. Pajitnov, the original developer, finally got royalties in 1996 when the Perestroika-era agreement ended. Because Henk Rogers had been a good dude, they formed The Tetris Company together to manage licensing of the game. Pajitnov went to work at Microsoft in 1996, working on games. The story has become much more standard since then. Although in 2012 the US Court of International Trade responded to some requests to shut competitors down, noting that US copyright didn't apply to the rules of a game, so Tetris did file other patents and trademarks to limit how close competitors could get to the original game mechanics. After studying at the MIT Media Lab, that 16-year-old programmer, Vadim Gerasimov, went on to become an engineer at Google. Henk Rogers serves as the Managing Director of The Tetris Company. Since designing Tetris, Pajitnov has made a couple dozen other games, with Marbly on iOS being the latest success. It needs a boss button. Tetris has been released on arcade games, home consoles, mobile devices, PDAs, music players, computers, oscilloscope Easter eggs, and I'm pretty sure it now has its own planet or 4. It probably owes some of its success to the fact that it makes people smarter. Dr. Richard Haier claims Tetris leads to more efficient brain activity. Boosts general cognitive abilities. Improved cerebral cortex thickness. If my cortex were thicker I'd probably research effects of games as a means of justifying the countless hours I wanted to spend on them too. So that's the story of Tetris. It ended the Cold War, makes you smarter, and now Alexey gets a cut of every cent you spend on it. So he'll likely thank you for your purchase. Just as I thank you for tuning in to another episode of the History of Computing Podcast. We're so very lucky to have you. 
Have a great day! Now back to my game!

The History of Computing
The Evolution Of The Microchip

The History of Computing

Play Episode Listen Later Sep 13, 2019 31:14


The Microchip Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today's episode is on the history of the microchip, or microprocessor. This was a hard episode, because it was the culmination of so many technologies. You don't know where to stop telling the story - and you find yourself writing a chronological story in reverse chronological order. But few advancements have impacted humanity the way the introduction of the microprocessor has. Given that most technological advances are a convergence of otherwise disparate technologies, we'll start the story of the microchip with the obvious choice: the light bulb. Thomas Edison first demonstrated the carbon filament light bulb in 1879. William Joseph Hammer, an inventor working with Edison, then noted that if he added another electrode to a heated filament bulb, it would glow around the positive pole in the vacuum of the bulb and blacken the wire and the bulb around the negative pole. 25 years later, John Ambrose Fleming demonstrated that if that extra electrode is made more positive than the filament, the current flows through the vacuum, and that the current could only flow from the filament to the electrode and not the other direction. This converted AC signals to DC and represented a boolean gate. In 1904, Fleming was granted Great Britain's patent number 24850 for the vacuum tube, ushering in the era of electronics. Over the next few decades, researchers continued to work with these tubes. Eccles and Jordan invented the flip-flop circuit at London's City and Guilds Technical College in 1918, receiving a patent for what they called the Eccles-Jordan Trigger Circuit in 1920. Now, English mathematician George Boole back in the earlier part of the 1800s had developed Boolean algebra. Here he created a system where logical statements could be made in mathematical terms. 
Those could then be performed using math on the symbols. Only a 0 or a 1 could be used. It took a while, but John Vincent Atanasoff and grad student Clifford Berry harnessed the circuits in the Atanasoff-Berry computer in 1938 at Iowa State University and, using Boolean algebra, successfully solved linear equations, but never finished the device due to World War II, when a number of other technological advancements happened, including the development of the ENIAC by John Mauchly and J Presper Eckert from the University of Pennsylvania, funded by the US Army Ordnance Corps, starting in 1943. By the time it was taken out of operation, the ENIAC had 20,000 of these tubes. Each digit in an algorithm required 36 tubes. Ten digit numbers could be multiplied at 357 per second, showing the first true use of a computer. John Von Neumann was the first to actually use the ENIAC when they used one million punch cards to run the computations that helped propel the development of the hydrogen bomb at Los Alamos National Laboratory. The creators would leave the University and found the Eckert-Mauchly Computer Corporation. Out of that later would come the Univac and the ancestor of today's Unisys Corporation. These early computers used vacuum tubes to replace gears that were in previous counting machines and represented the First Generation. But the tubes for the flip-flop circuits were expensive and had to be replaced way too often. The second generation of computers used transistors instead of vacuum tubes for logic circuits. The integrated circuit is basically a wire set into silicon or germanium that can be set to on or off based on the properties of the material. These replaced vacuum tubes in computers to provide the foundation of the boolean logic. You know, the zeros and ones that computers are famous for. As with most modern technologies the integrated circuit owes its origin to a number of different technologies that came before it was able to be useful in computers. 
This includes the three primary components of the circuit: the transistor, resistor, and capacitor. The silicon that chips are so famous for was actually discovered by Swedish chemist Jöns Jacob Berzelius in 1824. He heated potassium chips in a silica container and washed away the residue and voilà, an element! The transistor is a semiconducting device that has three connections that amplify data. One is the source, which is connected to the negative terminal on a battery. The second is the drain, a positive terminal that, when a voltage is applied to the gate (the third connection), the transistor allows electricity through. Transistors then act as an on/off switch. The fact they can be on or off is the foundation for Boolean logic in modern computing. The resistor controls the flow of electricity and is used to control the levels and terminate lines. An integrated circuit is also built using silicon, but you print the pattern into the circuit using lithography rather than painstakingly putting little wires where they need to go like radio operators did with the cat's whisker all those years ago. The idea of the transistor goes back to the mid-30s when William Shockley took the idea of a cat's whisker, a fine wire touching a galena crystal. The radio operator moved the wire to different parts of the crystal to pick up different radio signals. Solid state physics was born when Shockley, who first studied at Cal Tech and then got his PhD in Physics, started working on a way to make these useable in every day electronics. After a decade in the trenches, Bell gave him John Bardeen and Walter Brattain, who successfully finished the invention in 1947. Shockley went on to design a new and better transistor, known as a bipolar transistor, and helped move us from vacuum tubes, which were bulky and needed a lot of power, to first germanium, which they used initially, and then to silicon. 
Shockley got a Nobel Prize in physics for his work and was able to recruit a team of extremely talented young PhDs to help work on new semiconductor devices. He became increasingly frustrated with Bell and took a leave of absence. Shockley moved back to his hometown of Palo Alto, California and started a new company called the Shockley Semiconductor Laboratory. He had some ideas that were way before his time and wasn't exactly easy to work with. He pushed the chip industry forward but in the process spawned a mass exodus of employees, whom he called the "Traitorous 8," who left in 1957 to create Fairchild Semiconductor. The alumni of Shockley Labs ended up spawning 65 companies over the next 20 years that laid the foundation of the microchip industry to this day, including Intel. If he'd been easier to work with, we might not have had the innovation that we've seen; so in a way, thank Shockley's abrasiveness! All of these silicon chip makers being in a small area of California then led to that area getting the Silicon Valley moniker, given all the chip makers located there. At this point, people were starting to experiment with computers using transistors instead of vacuum tubes. The University of Manchester created the Transistor Computer in 1953. The first fully transistorized computer came in 1955 with the Harwell CADET, MIT started work on the TX-0 in 1956, and the THOR guidance computer for ICBMs came in 1957. But the IBM 608 was the first commercial all-transistor solid-state computer. The RCA 501, Philco Transac S-1000, and IBM 7070 took us through the age of transistors, which continued to get smaller and more compact. At this point, we were really just replacing tubes with transistors. But the integrated circuit would bring us into the third generation of computers. The integrated circuit is an electronic device that has all of the functional blocks put on the same piece of silicon. 
So the transistor, or multiple transistors, is printed into one block. Jack Kilby of Texas Instruments patented the first miniaturized electronic circuit in 1959, which used germanium and external wires and was really more of a hybrid integrated circuit. Later in 1959, Robert Noyce of Fairchild Semiconductor invented the first truly monolithic integrated circuit, which he received a patent for. Because they did so independently, both are considered creators of the integrated circuit. The third generation of computers was from 1964 to 1971, and saw the introduction of metal-oxide-silicon and printing circuits with photolithography. In 1965 Gordon Moore, also of Fairchild at the time, observed that the number of transistors, resistors, diodes, capacitors, and other components that could be shoved into a chip was doubling about every year and published an article with this observation in Electronics Magazine, forecasting what's now known as Moore's Law. The integrated circuit gave us the DEC PDP and later the IBM S/360 series of computers, making computers smaller, and brought us into a world where we could write code in COBOL and FORTRAN. A microprocessor is one type of integrated circuit. They're also used in audio amplifiers, analog integrated circuits, clocks, interfaces, etc. But in the early 60s, the Minuteman missile program and the US Navy contracts were practically the only ones using these chips, at this point numbering in the hundreds, bringing us into the world of the MSI, or medium-scale integration chip. Moore and Noyce left Fairchild and founded NM Electronics in 1968, later renaming the company to Intel, short for Integrated Electronics. Federico Faggin came over in 1970 to lead the MCS-4 family of chips. These along with other chips that were economical to produce started to result in chips finding their way into various consumer products. 
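Moore's observation lends itself to a quick back-of-the-envelope check. This is an illustrative sketch of my own, using the commonly quoted two-year doubling period rather than the yearly doubling of the original 1965 article:

```python
# Back-of-the-envelope sketch of Moore's observation: transistor
# counts roughly doubling every fixed period. The doubling_period
# default of 2 years is the later, revised figure, not the 1965 one.

def projected_transistors(start_count, start_year, year, doubling_period=2):
    """Project a transistor count forward, one doubling per period."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# The Intel 4004 (1971) had 2,300 transistors; projecting forward to
# 1989, the year of the million-transistor 80486, lands in the right
# ballpark.
print(round(projected_transistors(2_300, 1971, 1989)))  # 1177600
```

That a naive exponential lands within a factor of about one of the real 1989 figure is exactly why the "law" stuck as an industry roadmap.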
In fact, the MCS-4 chips, which split RAM, ROM, CPU, and I/O, were designed for the Nippon Calculating Machine Corporation, and Intel bought the rights back, announcing the chip in Electronic News with an article called "Announcing A New Era In Integrated Electronics." Together, they built the Intel 4004, the first microprocessor that fit on a single chip. They buried the contacts in multiple layers and introduced 2-phase clocks. Silicon oxide was used to layer integrated circuits onto a single chip. Here, the microprocessor, or CPU, splits the arithmetic and logic unit, or ALU, the bus, the clock, the control unit, and registers up so each can do what they're good at, but live on the same chip. The 1st generation of the microprocessor was from 1971, when these 4-bit chips were mostly used in guidance systems. This boosted the speed by five times. The forming of Intel and the introduction of the 4004 chip can be seen as one of the primary events that propelled us into the evolution of the microprocessor and the fourth generation of computers, which lasted from 1972 to 2010. The Intel 4004 had 2,300 transistors. The Intel 4040 came in 1974, giving us 3,000 transistors. It was still a 4-bit data bus but jumped to 12-bit ROM. The architecture was also from Faggin but the design was carried out by Tom Innes. We were firmly in the era of LSI, or Large Scale Integration chips. These chips were also used in the Busicom calculator, and even in the first pinball game controlled by a microprocessor. But getting a true computer to fit on a chip, or a modern CPU, remained an elusive goal. Texas Instruments ran an ad in Electronics with a caption that the 8008 was a "CPU on a Chip" and attempted to patent the chip, but couldn't make it work. Faggin went to Intel and they did actually make it work, giving us the first 8-bit microprocessor, the 8008, in 1972. It was then redesigned as the 8080, which was fabricated and put on the market in 1974. 
Intel made the R&D money back in 5 months and sparked the idea for Ed Roberts to build The Altair 8800. Motorola and Zilog brought competition in the 6800 and Z-80, which was used in the Tandy TRS-80, one of the first mass produced computers. N-MOS transistors on chips allowed for new and faster paths, and MOS Technology soon joined the fray with the 6501 and 6502 chips in 1975. The 6502 ended up being the chip used in the Apple I, Apple II, NES, Atari 2600, BBC Micro, Commodore PET and Commodore VIC-20. The MOS 6510 variant was then used in the Commodore 64. The 8086 was released in 1978 with 29,000 transistors and marked the transition to Intel's x86 line of chips, setting what would become the standard in future chips. But IBM wasn't the only place you could find chips. The Motorola 68000 was used in the Sun-1 from Sun Microsystems, the HP 9000, the DEC VAXstation, the Commodore Amiga, the Apple Lisa, the Sinclair QL, the Sega Genesis, and the Mac. The chips were also used in the first HP LaserJet and the Apple LaserWriter and used in a number of embedded systems for years to come. As we rounded the corner into the 80s it was clear that the computer revolution was upon us. A number of computer companies were looking to do more than what they could do with the existing Intel, MOS, and Motorola chips. And ARPA was pushing the boundaries yet again. Carver Mead of Caltech and Lynn Conway of Xerox PARC saw the density of transistors in chips starting to plateau. So with DARPA funding they went out looking for ways to push the world into the VLSI era, or Very Large Scale Integration. The VLSI project resulted in the concept of fabless design houses, such as Broadcom, 32-bit graphics, BSD Unix, and RISC processors, or Reduced Instruction Set Computer Processors. Out of the RISC work done at UC Berkeley came a number of new options for chips as well. 
One of these designers, Acorn Computers, evaluated a number of chips and decided to develop their own, using VLSI Technology (a company founded by more Fairchild Semiconductor alumni) to manufacture the chip in their foundry. Sophie Wilson (then Roger Wilson) worked on an instruction set for the RISC chip. Out of this came the Acorn RISC Machine, or ARM chip. Over 100 billion ARM processors have been produced, well over 10 for every human on the planet. You know that fancy new A13 that Apple announced? It uses a licensed ARM core. Another chip that came out of the RISC family was the Sun SPARC. Sun being short for Stanford University Network, and co-founder Andy Bechtolsheim having been close to the action, they released the SPARC in 1986. I still have a SPARC 20 I use for this and that at home. Not that SPARC has gone anywhere; it's just made by Oracle now. The Intel 80386 was a 32-bit microprocessor released in 1985. The first chip had 275,000 transistors, taking plenty of pages from the lessons learned in the VLSI projects. Compaq built a machine on it, but really the IBM PC/AT made it an accepted standard, although this was the beginning of the end of IBM's hold on the burgeoning computer industry. And AMD, yet another company founded by Fairchild defectors, created the Am386 in 1991, ending Intel's nearly five-year monopoly on the PC clone industry and ending an era where AMD was a second source of Intel parts; it was now competing with Intel directly. We can thank AMD's aggressive competition with Intel for helping to keep the CPU industry tracking along Moore's law! At this point transistors were only 1.5 microns in size - much, much smaller than a cat's whisker. The Intel 80486 came in 1989 and, again tracking against Moore's Law, gave us the first 1-million-transistor chip. Remember how Compaq helped end IBM's hold on the PC market? When the Intel 486 came along, they went with AMD. 
This chip was also important because we got L1 caches, meaning that chips didn't need to send instructions to other parts of the motherboard but could do caching internally. From then on, the L1 and later L2 caches would be listed on all chips. We'd finally broken 100MHz! Motorola released the 68040 in 1990, hitting 1.2 million transistors and giving Apple the chip that would define the Quadra, also with that L1 cache. The DEC Alpha came along in 1992, also a RISC chip, but really kicking off the 64-bit era. While the most technically advanced chip of the day, it never took off, and after DEC was acquired by Compaq and Compaq by HP, the IP for the Alpha was sold to Intel in 2001, the PC industry having largely passed it by. But back to the 90s, 'cause life was better back when grunge was new. At this point, hobbyists knew what the CPU was but most normal people didn't. The concept that there was a whole Univac on one of these never occurred to most people. But then came the Pentium. Turns out that giving a chip a name and some marketing dollars not only made Intel a household name but solidified their hold on the chip market for decades to come. The Intel Inside campaign started in 1991, and after the Pentium was released in 1993, the case of most computers would have a sticker that said Intel Inside. Intel really one-upped everyone. The first Pentium, the P5 or 586 or 80501, had 3.1 million transistors on a 0.8-micron process. Computers kept getting smaller and cheaper and faster. Apple answered by moving to the PowerPC chip from IBM, which owed much of its design to RISC. Exactly 10 years after the famous 1984 Super Bowl commercial, Apple was using a CPU from IBM. Another advance came in 2001, when IBM shipped the POWER4 chip and gave the world multi-core processors: a single CPU package with multiple CPU cores inside. 
Once parallel processing caught up to being able to have processes that consumed the resources on all those cores, we saw Intel's Pentium D and AMD's Athlon 64 X2, released in May 2005, bringing multi-core architecture to the consumer. This led to even more parallel processing, and an explosion in the number of cores helped us continue on with Moore's Law. There are now custom chips that reach into the thousands of cores, although most laptops have maybe 4 cores in them. Setting multi-core architectures aside for a moment, back to Y2K, when Justin Timberlake was still a part of NSYNC. Then came the Pentium Pro, Pentium II, Celeron, Pentium III, Xeon, Pentium M, Xeon LV, Pentium 4. On the IBM/Apple side, we got the G3 with 6.3 million transistors, the G4 with 10.5 million transistors, and the G5 with 58 million transistors and 1,131 feet of copper interconnects, running at 3GHz in 2002 - so much copper that NSYNC broke up that year. The Pentium 4 that year ran at 2.4 GHz and sported 50 million transistors. That's about 1 transistor per dollar made off Star Trek: Nemesis in 2002. I guess Attack of the Clones was better, because it grossed over 300 million that year. Remember how we broke the million-transistor mark in 1989? In 2005, Intel started testing Montecito with certain customers: an Itanium 2 64-bit CPU with 1.72 billion transistors, shattering the billion mark and hitting it two years earlier than projected. Apple CEO Steve Jobs announced that year that Apple would be moving to Intel processors. NeXTSTEP had been happy as a clam on Intel, SPARC, or PA-RISC, so given the rapid advancements from Intel, this seemed like a safe bet and allowed Apple to tell directors in IT departments, "see, we play nice now." And the innovations kept flowing for the next decade and a half. 
We packed more transistors in, more cache, cleaner clean rooms, faster bus speeds, with Intel owning the computer CPU market and ARM slowly growing from the ashes of Acorn Computers into the powerhouse that ARM cores are today, embedded in countless other chip designs. I'd say not much interesting has happened, but it's ALL interesting, except the numbers just sound stupid they're so big. And we had more advances along the way of course, but it started to feel like we were just miniaturizing more and more, allowing us to do much more advanced computing in general. The fifth generation of computing is all about technologies that we today consider advanced: artificial intelligence, parallel computing, very high level computer languages, and the migration away from desktops to laptops and even smaller devices like smartphones. ULSI, or Ultra Large Scale Integration, chips not only tell us that chip designers really have no creativity outside of chip architecture, but also mean millions up to tens of billions of transistors on silicon. At the time of this recording, the AMD Epyc Rome is the single chip package with the most transistors, at 32 billion. Silicon is the eighth most abundant element in the universe and the second most abundant in the crust of the planet Earth. Given that there are more chips than people by a huge percentage, we're lucky we don't have to worry about running out any time soon! We skipped RAM in this episode. But it kinda deserves its own, since RAM is still following Moore's Law while the CPU is kinda lagging again. Maybe it's time for our friends at DARPA to get the kids from Berkeley working on Very Ultra Large Scale chips, or VULSIs! Or they could sign on to sponsor this podcast! And now I'm going to go take a Very Ultra Large Scale nap. Gentle listeners, I hope you can do that as well. Unless you're driving while listening to this. Don't nap while driving. But do have a lovely day. 
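The transistor counts quoted throughout the episode track the classic doubling rule surprisingly well. Here is a quick sketch; the two-year doubling period is the textbook rule of thumb, not a value fitted to the data, and the "actual" counts are the figures mentioned in the episode:

```python
# Rough Moore's-law projection: transistor count doubling every ~2 years,
# anchored at the Intel 4004 (1971, 2,300 transistors) from the episode.

def projected_transistors(year, base_year=1971, base_count=2_300, doubling_years=2.0):
    """Project a transistor count assuming one doubling every `doubling_years`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Compare the naive projection against counts quoted in the episode.
milestones = [
    (1974, 3_000),            # Intel 4040
    (1989, 1_000_000),        # Intel 80486
    (2005, 1_720_000_000),    # Itanium 2 "Montecito"
    (2019, 32_000_000_000),   # AMD Epyc Rome
]
for year, actual in milestones:
    print(f"{year}: projected ~{projected_transistors(year):,.0f} vs. quoted {actual:,}")
```

The rule of thumb lands within roughly an order of magnitude at every milestone over five decades, which is the remarkable part; by 2019 it projects about 38.6 billion transistors against the 32 billion quoted for Epyc Rome.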
Thank you for listening to yet another episode of the History of Computing Podcast. We're so lucky to have you!

Poziom niżej
#005 - Quo Vadis ARM?

Poziom niżej

Play Episode Listen Later Jul 5, 2019 96:20


In the fifth episode we ponder what future lies ahead for the ARM architecture. We present the history of how ARM Holdings came to be, explain why energy efficiency never goes hand in hand with performance, and why ARM processors are internally very similar to Intel x86 processors. Along the way we explain why performance does not depend on the instruction set and why Moore's law no longer holds. The main thread of the episode, though, is the question that has always nagged us: "Why is the ARM architecture (despite great efforts) not at home on PCs and servers?" In trying to answer it we drift in various directions, from standardization to global politics at the intersection of the USA and China. We close the episode with a slightly tongue-in-cheek discussion of RISC-V and a response to Linus Torvalds' comment.
Hosts: Radosław Biernacki, Rafał Jaworowski, Maciej Czekaj, Marcin Wojtas
Hashtags: ARM, AArch64, ARMv8, ARM on ARM, RISC-V
### Episode plan
# (0:50) History of the ARM company
# (3:28) What sets the ARM company apart
# (7:42) How does ARM make money?
# (8:17) Models of cooperation with ARM (license tiers)
# (15:32) Challenges in creating an entirely new architecture
# (22:06) The myth of ARM energy efficiency
# (28:13) What consumes the most energy in a CPU?
# (33:25) Why is ARM absent from the PC world?
# (42:39) Attempts to build an ARM PC
# (44:27) Why the ARM company doesn't support ARM PCs
# (46:40) The GPU problem on ARM (option ROM)
# (49:13) The software compatibility problem on ARM
# (53:14) What is needed for ARM adoption in servers
# (54:46) Global politics in HPC
# (56:45) The price war in HPC
# (1:01:23) The standardization problem in servers
# (1:08:30) Why hasn't ARM produced a server CPU?
# (1:10:35) Serious consequences of ARM's passivity
# (1:11:09) Does ARM even want to enter the server market?
# (1:14:42) ARM's percentage share of processor markets
# (1:16:54) What convinces buyers to switch?
# (1:22:40) Or maybe RISC-V?
# (1:30:12) And Linus said that...
Links
(0:50) ARM architecture history - https://en.wikipedia.org/wiki/ARM_architecture#History
(1:14) Acorn - https://en.wikipedia.org/wiki/Acorn_Computers
(1:30) BBC Micro - https://en.wikipedia.org/wiki/BBC_Micro
(1:59) VLSI - https://en.wikipedia.org/wiki/VLSI_Technology
(2:35) 68000 - https://en.wikipedia.org/wiki/Motorola_68000
(2:21) ARM 1 - https://en.wikichip.org/wiki/acorn/microarchitectures/arm1
(4:24) Apple Newton - https://en.wikipedia.org/wiki/Apple_Newton
(8:30) How ARM's business model works - https://www.anandtech.com/show/7112/the-arm-diaries-part-1-how-arms-business-model-works/2
(12:52) Atmel - Microchip - https://en.wikipedia.org/wiki/Atmel
(13:47) Cortex - https://en.wikipedia.org/wiki/ARM_Cortex-A
(14:35) Marvell - https://en.wikipedia.org/wiki/Marvell_Technology_Group
(15:00) ARM versions - https://www.cs.umd.edu/~meesh/cmsc411/website/proj01/arm/armchip.html
(15:35) The Polish D32PRO processor - https://pclab.pl/news65816.html
(18:33) An example of reverse-engineering a BLE CPU - https://github.com/sylvek/itracing2/issues/5#issuecomment-226080683
(19:39) Parallella - https://www.parallella.org/board/
(21:38) Qualcomm Centriq - https://en.wikipedia.org/wiki/Qualcomm_Centriq
(21:44) Cavium - Marvell Thunder - https://www.marvell.com/server-processors/thunderx-arm-processors/
(21:46) APM X-Gene - https://www.apm.com/products/data-center/x-gene-family/x-gene/
(21:49) Qualcomm Snapdragon - https://en.wikipedia.org/wiki/Qualcomm_Snapdragon
(24:59) Arm Delivers on Cortex A76 Promises: What it Means for 2019 Devices - https://www.anandtech.com/show/13614/arm-delivers-on-cortex-a76-promises
(28:25) Way-Predicting Set-Associative Cache for High Performance and Low Energy Consumption - http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.135.5610&rep=rep1&type=pdf
(29:12) Power Wall - 45-year CPU evolution: one law and two equations - https://arxiv.org/pdf/1803.00254.pdf
(31:02) Static power loss - Leakage Current: Moore's Law Meets Static Power - http://www.ruf.rice.edu/~mobile/elec518/readings/DevicesAndCircuits/kim03leakage.pdf
(32:51) Cortex A73 overview - https://www.anandtech.com/show/10347/arm-cortex-a73-artemis-unveiled
(35:30) Raspbian - https://www.raspberrypi.org/downloads/raspbian/
(36:17) Cortex-A - https://developer.arm.com/ip-products/processors/cortex-a
(36:20) ARM GIC - https://developer.arm.com/ip-products/system-ip/system-controllers/interrupt-controllers
(37:05) SBSA - https://developer.arm.com/architectures/platform-design/server-systems
(37:28) ACPI - http://uefi.org/sites/default/files/resources/ACPI_6_2.pdf
(40:20) Macchiatobin - http://macchiatobin.net/
(42:04) Arm on Arm - https://www.youtube.com/watch?v=rl0sls6vnmk
(43:15) SocioNext SynQuacer - https://www.socionext.com/en/products/assp/SynQuacer/Edge/
(45:30) ARM roadshow slides 2018 - https://www.arm.com/-/media/global/company/investors/PDFs/Arm_SBG_Q4_2018_Roadshow_Slides_FINAL.pdf?revision=ebab8585-b3df-4235-b515-c3ef20379baf&la=en
(48:07) EDK2 - https://github.com/tianocore/edk2
(48:12) x86 Option ROM for ARM - https://www.suse.com/c/revolutionizing-arm-technology-x86_64-option-rom-aarch64/
(48:17) ARM GPU commit - https://github.com/tianocore/edk2-non-osi/commit/77b5eefd9
(50:28) Open Compute Project - https://en.wikipedia.org/wiki/Open_Compute_Project
(52:54) ThunderX workstation - https://www.asacomputers.com/Cavium-ThunderX-ARM.html
(55:00) Kunpeng 920 - https://www.servethehome.com/huawei-kunpeng-920-64-core-arm-server-cpu/
(57:19) PowerPC - https://en.wikipedia.org/wiki/PowerPC
(57:27) SPARC - https://en.wikipedia.org/wiki/SPARC
(1:00:37) Linaro - https://en.wikipedia.org/wiki/Linaro
(1:00:54) RAS - https://www.kernel.org/doc/html/v4.14/admin-guide/ras.html
(1:04:37) Amazon Graviton - https://en.wikichip.org/wiki/annapurna_labs/alpine/al73400
(1:05:00) Amazon EC2 - https://aws.amazon.com/ec2/instance-types/a1/
(1:06:43) Jon Masters - https://www.linkedin.com/in/jonmasters/
(1:07:48) Intel supports AI development - https://software.intel.com/en-us/devcloud/datacenter
(1:09:42) ARM roadshow slides 2018 - https://www.arm.com/-/media/global/company/investors/PDFs/Arm_SBG_Q4_2018_Roadshow_Slides_FINAL.pdf?revision=ebab8585-b3df-4235-b515-c3ef20379baf&la=en
(1:10:47) Qualcomm shuts down its server division - https://www.tomshardware.com/news/qualcomm-server-chip-exit-china-centriq-2400,38223.html
(1:13:22) Galileo, Edison, Joule, Curie - https://software.intel.com/en-us/iot/hardware/discontinued
(1:15:02) ARM roadshow slides 2018 - https://www.arm.com/-/media/global/company/investors/PDFs/Arm_SBG_Q4_2018_Roadshow_Slides_FINAL.pdf?revision=ebab8585-b3df-4235-b515-c3ef20379baf&la=en
(1:18:00) AArch64 virtualization - https://developer.arm.com/docs/100942/latest/aarch64-virtualization
(1:18:31) Cavium ThunderX2 Review and Benchmarks: a Real Arm Server Option - https://www.servethehome.com/cavium-thunderx2-review-benchmarks-real-arm-server-option/
(1:19:22) SR-IOV - https://en.wikipedia.org/wiki/Single-root_input/output_virtualization
(1:21:25) Octeon TX - https://www.marvell.com/embedded-processors/infrastructure-processors/octeon-tx-multi-core-armv8-processors/index.jsp
(1:22:58) RISC-V - https://en.wikipedia.org/wiki/RISC-V
(1:26:50) WD and RISC-V - https://blog.westerndigital.com/risc-v-swerv-core-open-source/
(1:29:04) ARM RISC-V FUD - https://github.com/arm-facts/arm-basics.com/blob/master/assets/img/riscv-basics.com-screenshot.jpg
(1:30:16) Linus on ARM in servers - https://www.extremetech.com/computing/286311-linus-torvalds-claims-arm-wont-win-in-the-server-space
(1:30:41) Packet.net - https://www.packet.com/
(1:31:04) Ampere eMAG - https://amperecomputing.com/wp-content/uploads/2019/01/eMAG8180_PB_v0.5_20180914.pdf

Software Lifecycle Stories
19: Just Be - Right Now!

Software Lifecycle Stories

Play Episode Listen Later Apr 4, 2019 48:53


Tune in for tips from a techie turned transformational teacher! Sivaguru from PM Power is very happy to share his conversation with Suresh Ramaswamy, a successful serial entrepreneur who started with roles in IT and then pivoted to helping others discover themselves as a transformational teacher. When this conversation started, we were expecting a highly technical discussion from a VLSI architect, but what it ended up being was -- literally -- mind blowing! At PM Power, we have always combined the hard and soft aspects, including mindful approaches, for deriving the maximum impact from journeys of delivery excellence. This conversation touches on some of the mindfulness aspects that can make an entrepreneur or an individual contributor successful. In this conversation, Suresh shares his perspectives on:
* Personal development and transformation
* Keys for entrepreneurial success, including looking at entrepreneurship as going beyond an escape from corporate stress
* Operating in an environment of uncertainty and risk in a startup
* The importance of having a balanced and impactful intent
* Identifying core objectives for your startup
* Letting go - to learn, manage stress, etc.
* The need to be inclusive
* Personal and professional relationships
And a peek into his latest book, "Just Be", which is about moving through and living life under an infinite guidance. Suresh Ramaswamy is the award-winning author of the personal transformation bestseller Just Be, Transform Your Life and Live as Infinity. As a transformational teacher and visionary entrepreneur, he is passionate about igniting and catalyzing the transformation of humanity. With his background as an electrical engineer and technology executive, he brings an inspired yet pragmatic approach to elevating consciousness on our planet. Held in high regard by people around the world, Suresh's light-filled presence and guidance awakens them to their innermost essence.  
Instagram: https://www.instagram.com/just.be.book/ Facebook: https://www.facebook.com/sureshramaswamy.author/ Twitter: https://twitter.com/sureshauthor https://SureshRamaswamy.org

OnTrack with Judy Warner
Design for Manufacturability (DFM) and Assembly (DFA) tips with Jay Colognori

OnTrack with Judy Warner

Play Episode Listen Later Aug 8, 2018 27:04


Get Design for Manufacturability (DFM) tips from Jay Colognori, Director of Business Development at Electronic Instrumentation & Technology (EIT). DFM and Design for Assembly (DFA) are important to engineers who know you can’t just design a PCB and throw it over the wall to manufacturing. Early and proactive optimization of all the manufacturing functions from fabrication to assembly of the final system is key. Listen to Jay and Judy discuss high-yield designs, EIT’s value-added engineering services and the latest state-of-the-art inspection technology and test capability.   Show Highlights: Jay was educated at Virginia Tech where he attained an EE Degree, followed by a Master's in Electrical Engineering at the University of Virginia. He spent  most of his career in the mid-atlantic and his career spans from board level electronic design to applications engineering doing custom microelectronics for a couple of years, eventually ending up in PCB Design first at TTM and now at EIT. EIT has been in existence for 42 years, and specialize in electronic manufacturing services, turnkey builds, box builds, and demand fulfillment and consider their Engineering value add as part of their DNA. EIT has three facilities on the East Coast, consisting of over 200,000 sq ft. They have a facility in Danville Virginia, headquarters in Leesburg and another in Salem, New Hampshire. Altogether they have eight surface mount lines. The Danville facility is designated as the low-cost center of excellence and is also a 100% vertically integrated location - it is built for box builds. Leesburg and Salem are high-tech facilities with the latest state-of-the-art universal equipment, as well as the latest and greatest inspection technology and a full suite of test capability - with a lot going on and a story that needs to get out - it’s almost been a secret! 
New EIT website
DFM: Bare board tips
Two objectives - 1) design the board so that it can be fabricated reliably and with high yields, and 2) so that it can be assembled.
Via-in-pad requires a wrap plating process to provide a reliable button around the via. This process requires more copper, which can wreak havoc with fine-line design, so be sure to plan upfront and move those fine-line geometries to the inner layers.
Overlapping via structures can't be made. They need to be stacked and sequential, not overlapping.
Sit down with your PCB fabricator at the time of stackup development, before you even start that router, and make sure everyone's happy with the stackup.
FR4 has too high a Dk for high-speed designs today, and new materials such as Teflon or ceramic-filled laminates are becoming more common. If you work with a new material, consult your PCB fabricator to see how the rules have changed with that material for the speed you desire.
DFA wisdom:
Common pads - pads so close together that they touch, rather than routing a thin signal from pad to pad - are a common problem. We don't want pads to physically share the same space, which causes loss of control over the solder flow. Keep the pads apart and just run a small signal trace between them.
The via in the pad has to be filled; it must be plated over and planarized. Sometimes the planarization isn't done properly, and even a little dimple, with a BGA on top, will cause the gas trapped beneath the solder paste to expand ferociously and blow all the solder out of the pad at reflow.
Always use non-conductive filler; it's much less expensive. The thermal benefit of using conductive fill is negligible and too expensive for the return. Thermal conductivity is measured in watts per meter-kelvin (W/m·K) - a conductive fill only adds about 6 W/m·K, which is rather pointless because the copper is already doing all the work.
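The copper-versus-fill point can be put in numbers with a back-of-the-envelope thermal model. All dimensions and conductivities below are illustrative assumptions (a typical 1.6 mm board, 0.3 mm drill, 25 µm barrel plating), not figures from the episode, except the roughly 6 W/m·K conductivity of a conductive fill mentioned above:

```python
import math

# Thermal resistance of a single plated via through the board: the copper
# barrel and the fill are parallel heat paths, each with R = L / (k * A).
K_COPPER = 390.0           # W/(m*K), electrodeposited copper
K_FILL_COND = 6.0          # W/(m*K), typical conductive epoxy fill
BOARD_THICKNESS = 1.6e-3   # m
DRILL_DIA = 0.3e-3         # m
PLATING = 25e-6            # m of copper on the barrel wall

def resistance(k, area):
    """Thermal resistance of a straight path through the board."""
    return BOARD_THICKNESS / (k * area)

r_outer = DRILL_DIA / 2
r_inner = r_outer - PLATING
barrel_area = math.pi * (r_outer**2 - r_inner**2)  # annular copper barrel
fill_area = math.pi * r_inner**2                   # filled center

r_barrel = resistance(K_COPPER, barrel_area)
r_fill = resistance(K_FILL_COND, fill_area)
r_both = 1 / (1 / r_barrel + 1 / r_fill)           # parallel combination

print(f"copper barrel alone:      {r_barrel:.0f} K/W")
print(f"barrel + conductive fill: {r_both:.0f} K/W")
```

With these assumed numbers the conductive fill improves the via's thermal resistance by only a few percent over the bare copper barrel, which is the "copper is already doing all the work" argument in quantitative form.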
When a thermal via is located in a big plane, with a copper button around it - the button will be in contact with the plane and this is a big no-no. It compromises the solder flow again. Do a sprocket arrangement around that thermal via button. This will create a gap between the button and plane and sprockets simply act as traces surrounding it - very good design practice, frequently missed. Especially on backplanes with active components, this will require retooling to enable manufacturability. Why has design migrated as a service inside many EMS companies? What is the value to the customer? It’s a benefit to both the customer and the EMS. We want to do more for the customer than just assemble the circuit cards. We want projects going through without a hitch, no delays. What we all want is production of electronics. Why did EIT recently choose to onboard Altium Designer internally over other tools? Firstly it’s an all-inclusive package. It’s schematic and design, we like the ECAD and MCAD interface which makes it easy to do 3D fit models. We love the room creation capability that allows you to reuse previous designs. It has very solid DFM rules capability which are set up in advance - that’s a nice piece of insurance. It’s reasonably priced compared to the other high-end tools as well. Engineers After Hours: Big hiker, especially the Rocky Mountains. We’re going to do 3 national parks this summer. Unique hobbies? Jay has been a dart player since the age of 19. Played in a couple of US opens. Pro advice: 2-3 beers is the sweet spot for optimal dart throwing performance.   Links and Resources: EIT Electronic Instrumentation & Technology Website Jay Colognori on Linkedin EIT on Linkedin About Jay Colognori AltiumLive 2018: Annual PCB Design Summit   Hey everyone, this is Judy Warner with Altium's OnTrack podcast. Thanks for joining us again. 
I appreciate everyone that's following, we are spreading like wildfire and we thank you for all your comments and opinions and we always look forward to hearing about things you want to hear - so reach out to us on Twitter; I'm @AltiumJudy, or you can connect with me on LinkedIn, or Altium is on LinkedIn, Facebook, and Twitter. So today I have a longtime friend and ex-colleague, Jay Colognori, and Jay is the Director of Business Development at EIT, which is Electronic Instrumentation and Technology in Richmond, Virginia, and you're gonna have fun just listening to Jay because it's like talking to Matthew McConaughey! So you girls out there? We're gonna just have fun listening to Jay talk... Just kidding, but he does have a nice Southern drawl. So Jay, thanks so much for joining us today, and we look forward to talking to you about DFA and some technical stuff today. So thanks for joining. Thank you for having me. This is an exciting time at EIT - we just added on Altium capability, and so I'm delighted to get the word out, and what better way to do it than talking to you? Well, you know when you suck up to your friends at Altium, you get on the podcast. That's how it works around here. So Jay, why don't you start out by telling our listeners a little bit about your educational background and your professional background - sort of set the stage for us? Okay sure. I picked up an EE Degree at Virginia Tech and then a Master's EE at the University of Virginia. So you could say I'm a son of Virginia for sure. 
I managed to spend most of my career here in the Mid-Atlantic and my career spans from board-level electronic design to integrated circuits, VLSI design, and then kind of jumped over to the other side of the table and became an Applications Engineer, doing custom microelectronics and had a few years running a rep firm making some commissions along the way, and then I ended up in the printed circuit board business working for DDI and VIASystems, now TTM, and one of my customers was a company called Zentech which was an Electronic Manufacturing services company, and I went to work for them. And now I work for EIT who is also situated in the Mid-Atlantic. I do want to correct one thing you mentioned. I live in Richmond, Virginia, but EIT is based in Leesburg, Virginia where we have two other facilities, I can talk some more about that. Alright. Thanks for correcting me there. So with all that variety of background why don't you tell us a little bit about where you are now and about EIT and what their expertise is, and what kind of technology makes they handle and so forth? Yeah. Okay, so I joined EIT back in March, very happy to be there. This is a company that's in its 42nd year of providing... Wow. Yeah - Engineering services which then led to electronic manufacturing services. So we do both; engineering is very much in our DNA, we consider our engineering value-add to be an important part of most of our customer relationships. So, you know, the thing about being in the electronic manufacturing service is that it's kind of a commodity when you look at it from the standpoint of just picking and placing parts with machines. So, we're looking to engage customers at additional levels, including engineering, turnkey builds, turnkey testing solutions, of all manner, box build if necessary, demand fulfillment, soup-to-nuts… so that we're doing more than just using those machines. And EIT has three facilities on the East Coast. 
Altogether we have over 200,000 square feet of brick and mortar which makes us pretty big for a small company. We have a facility in Danville, Virginia. Our headquarters is in Leesburg. And then another in Salem, New Hampshire. Altogether, I've got eight surface mount lines to keep busy. Danville is what we designated our low-cost center of excellence. It's also a 100% vertically integrated location because they can do any kind of metalwork, cabling wire, box build. We have all that in place. It's a purpose-built facility to support the box builds, which we like to do for our customers. We don't do metal stand-alone, although occasionally I'll build a heat sink or something for somebody. We tend to allocate that factory towards our customer box builds. Okay. And then Leesburg, and Salem New Hampshire are high-tech facilities, they both have the latest state-of-the-art universal equipment, so we can back each other up if something goes wrong and they both have a full suite of the latest and greatest and automated inspection technologies and a full suite of test capability. Wow that sounds impressive! It's a heck of a lot going on, and a story that needs to get out; it's kind of been kept a secret lately so yeah... I haven't heard of them. I mean I'm on the left coast, of course, but I had not heard of them, but they sound like a really great facility with a really good… going all the way from true engineering to box builds. That's nice. So… and we'll make sure to share the link, by the way, for any of you listeners who are looking for a good EMS or engineering service or whatever. We'll be sure to share that link on the show notes. So yeah, and please do, because we're launching a new website next week, so I want to get that out. Okay. Hopefully the timing for that'll work out. Okay. Alright good. We'll send you some traffic for your new website. 
So Jay, because of the breadth of your knowledge and experience and background, I thought it'd be great for our listeners today if you shared a few tips from, you know, being that you came from some of the largest board manufacturers, certainly in North America and almost the world - maybe three tips or so on bare boards and then a few on DFA, to help the designers in our audience, and engineers in our audience, that might want to learn a few tips and tricks from a pro? Okay, you know I came up with a few of each. You know, really when you're looking to design a printed circuit board, you have two fundamental objectives: one, to design it so that it can be fabricated reliably and with high yields. And then two, so that it can be assembled. So there's mistakes that can be made that can affect both key processes. So first of all, let's talk a little bit about PCB design issues that affect PCB fabrication and reliability, and, you know, none of these I think are gonna be earth-shattering, but it's interesting to see the same mistakes being made a lot over and over again. So, we just kind of keep - we're banging the drum and we hope everybody gets the message sooner or later. So, I guess beginning with via-in-pad. There's more and more via-in-pad today, by necessity, and via-in-pad, in order to be done reliably, requires a wrap plating process. Without getting into specifics of what that does, what the purpose of it is, it provides a reliable button around the via. Without the wrap plating process, it's an unreliable arrangement, but that requires the addition of more copper on the outer layers than you would see otherwise, and this wreaks havoc with fine-line design. So, if you're at 3-mil trace and space or below, you really can't tolerate that extra copper; so it requires planning upfront. Understand if you're gonna need wrap plating, and if so, maybe move those fine-line geometries to the inner layers where that won't come into play. Okay, that makes sense. 
And it'll get you - I mean, you think your design's done and then the next thing you know, your fabricator says, well, you know, you realize I'm going to add this much copper to the outside, and now you're violating trace and space. I've seen this happen too with multiple laminations on RF and microwave boards, when you're doing sequential lam or whatever and you keep plating and plating, and people, when they do their simulations, don't add in those extra layers that are getting extra copper too. So it really can throw you off. All right. That's a good one. Here's another one that, you know, I'm told we're still seeing a lot of in the market by the guys that I used to work with at DDI, and that is, you know, they'll see overlapping via structures where the designer has put a via from, say, layer one to three, and another one from layer two to eight; that can't be made. They have to be stacked; they have to be sequential. They can't be overlapping, but believe it or not, you see it. I've seen it many, many times, but you know, to be fair, sometimes when I look at those cores and figure out how they're gonna be stacked up, it… you know, I get confused too. So... Well, I mean, I'll say this again and again: sit down with your PCB fabricator at the time you develop your stackup and your basic via structure, and basically your structure is going to be driven by the toughest part of the design. Maybe it's a BGA with a finer pitch than you've ever used before. You're not even sure how to route it; you're probably going to have to stack some microvias, or at the very least have some blind or buried vias to get the job done. Sit down with a fabricator, before you even start that router, and make sure that everybody likes the stackup and that it looks manufacturable. Yep, very sound advice. Okay, that's another good one, got another one for us? 
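The stacked-versus-overlapping rule Jay describes can be sketched as a toy design-rule check. The layer-span model here is a simplification for illustration, not an actual fabricator's rule deck: spans that are sequential (sharing at most an endpoint) or fully nested are fine, while partially overlapping spans get flagged:

```python
# Toy manufacturability check for blind/buried via spans, given as
# (start_layer, end_layer) pairs. Spans like (1, 3) and (2, 8) partially
# overlap and can't be drilled; stacked/sequential or nested spans are OK.

def spans_compatible(a, b):
    """Two via spans are buildable together if they don't partially overlap."""
    (a1, a2), (b1, b2) = sorted(a), sorted(b)
    sequential = a2 <= b1 or b2 <= a1                     # stacked end to end
    nested = (b1 <= a1 and a2 <= b2) or (a1 <= b1 and b2 <= a2)
    return sequential or nested

def check_vias(spans):
    """Return every pair of spans that violates the rule."""
    bad = []
    for i in range(len(spans)):
        for j in range(i + 1, len(spans)):
            if not spans_compatible(spans[i], spans[j]):
                bad.append((spans[i], spans[j]))
    return bad

print(check_vias([(1, 3), (2, 8)]))  # the example from the conversation: flagged
print(check_vias([(1, 3), (3, 8)]))  # sequential spans: no violations
```

A real stackup review covers far more (drill aspect ratios, lamination count, pad sizes), which is exactly why the advice is to sit down with the fabricator before routing.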
One more I want to talk about, because this is happening more and more - you know, all the designs are getting faster and faster. I mean, high speed digital is now in the radio frequency range, and FR4 just has too high a dielectric constant for many of the new designs now, so many PCB designers are going to have to work with materials they haven't worked with before - the more exotic, more expensive materials - and when you start talking about fabricating a PCB with Teflon versus FR4, you're talking about different processing altogether. So, when you go to a new material, consult with your PCB guy as to which material would be suitable for the speed you're looking at, and ask them: okay, how have the rules changed with that material? What are your limitations? And you can ask the EMS provider the same question, because the printed circuit board is the foundation upon which all of our business is done. So we understand PCBs, but I think especially with materials you want to talk to the fabricator. Yeah, it's true, and when I worked for an RF and microwave shop once, I told them: you know, sometimes when you see a piece of Teflon material and a piece of, I don't know, Rogers 4350, until you strip the copper off you can't tell they're different. But inside the board shop that Teflon can turn into bubble gum - it's not reinforced. When you take the copper off and you go like this [motions], it flaps in the wind, where the 4350 will remain rigid. So that kind of gives you a visual sense that this is radically different - in the way that it processes inside the shop, and the way it interacts with chemicals, moisture, heat. So it is true - the closer you can be to your fabricator when you go into those materials, the better. Okay, those were three good ones. All right, how about DFA wisdom?
Okay, well, one we see quite often - and I guess it's tempting for the designer to do this, because he thinks he's found a shortcut and a way to use less PCB area - but you see a lot of guys trying to use what we call common pads, and these are pads that are so close together that they touch, rather than routing a thin signal from pad to pad. So these pads do share the same signal, but we don't want them to physically share the same space. That causes us problems with controlling what the solder does once it flows, so keep those pads apart and run just a small signal trace between them. Then we'll let the solder mask do the rest, and we can control the flow of the solder. So that's a real simple one, but we run into it a lot. Okay. I talked before about via-in-pad; we see a lot of designs where people don't fill that via. If the via is in the pad, it's got to be filled, and it must be plated over and planarized. Sometimes that's done properly, sometimes it's not. There's a little dimple there... Yup. If there's a dimple in that pad and I place a BGA ball on top of that, gas is gonna get trapped underneath the solder paste that I apply - there'll be a little air in that dimple, and heated gas expands; it expands ferociously. It doesn't want to stay where it is, and it'll blow all the solder right out of the pad at reflow time. And you know, I have customers argue with me against it - it is expensive to fill and planarize vias, but it's the right thing to do. You have to do it if you want reliable BGA connections. These are leadless parts that we can't inspect visually; we have to use x-ray. It's not really practical to use a hundred percent x-ray inspection except on high-reliability applications like military, maybe medical. So we lot-sample these BGAs with x-ray, and if we don't see any problems with a lot, we carry on. So I can't emphasize that enough: fill those vias, and fill those vias properly.
And I would add further that, you know, we have some people that are using thermal vias - these are vias which are designed not necessarily to conduct an electrical signal, although they do, but to conduct heat away from a hot part to maybe a ground plane, which might be an inner layer or wherever - and you run into people who call out... so there has to be a fill in those vias before they're plated over. Right. And there's two types of filling: there's conductive and there's non-conductive. I strongly recommend never to use conductive. Non-conductive is much less expensive. The benefit of using conductive fill from a thermal point of view is super minimal. I mean, the copper's doing all the work... Okay. -and if you need to draw more current or pull more thermal energy, just create more thermal vias, because the copper's doing all of the work. I mean, I'll give you some numbers: thermal conductivity is defined in watts per meter-kelvin. A typical via is going to give you plenty of conduction from the copper; if you fill it with conductive fill, you only get about six more watts per meter-kelvin. What's that compared to... Oh, yeah. -380 - I'm sorry, 380 is what the copper gives you. The non-conductive only has 0.6 watts per meter-kelvin. But the point is, both of those are in the noise compared to what the copper's doing. Right. And last but not least: good luck trying to get a printed circuit board fabricated in China with conductive fill - they don't do it over there. Really? Right, so, well, maybe somebody's doing it, but we're having a hard time finding any. Interesting, hmm. Why is that? Because it's not... They just don't like it? It's not important, so we're just not doing it? I don't think anybody should be doing it; it doesn't make sense to me - it's too expensive for the return. Interesting, I'd never heard that before actually, but it makes sense with those numbers. I'm kind of surprised it took root for a while.
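Jay's point can be sanity-checked with a back-of-the-envelope conductance calculation. The via geometry below (0.3 mm drill, 25 µm plating, 1.6 mm board) is an assumed example of mine, not from the conversation; only the rough conductivity figures (about 380 W/m·K for copper, 6 for conductive fill, 0.6 for non-conductive) echo what's quoted above. Thermal conductance scales as G = k·A/L, and the copper barrel swamps either fill:

```python
import math

# Back-of-the-envelope check of the claim that the plated copper barrel carries
# nearly all of a thermal via's heat. Geometry is a hypothetical example.

K_COPPER = 380.0        # W/(m·K), roughly as quoted in the conversation
K_COND_FILL = 6.0       # conductive via fill
K_NONCOND_FILL = 0.6    # non-conductive via fill

def conductance(k, area_m2, length_m):
    """Thermal conductance G = k * A / L, in W/K."""
    return k * area_m2 / length_m

board = 1.6e-3            # 1.6 mm thick board
r_hole = 0.15e-3          # 0.3 mm drill
r_core = r_hole - 25e-6   # inner radius left after 25 um of plating

barrel_area = math.pi * (r_hole**2 - r_core**2)  # copper annulus
core_area = math.pi * r_core**2                  # fill cylinder

g_copper = conductance(K_COPPER, barrel_area, board)
g_cond = conductance(K_COND_FILL, core_area, board)
g_noncond = conductance(K_NONCOND_FILL, core_area, board)

print(f"copper barrel:         {g_copper * 1e3:.2f} mW/K")
print(f"conductive fill adds:  {g_cond * 1e3:.2f} mW/K "
      f"({100 * g_cond / g_copper:.1f}% more)")
print(f"non-conductive adds:   {g_noncond * 1e3:.3f} mW/K")
```

With these assumed dimensions the conductive fill adds only a few percent on top of the copper barrel, which is the "in the noise" argument for using the cheaper non-conductive fill and simply adding more vias when you need more heat transfer.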
We recommended as far back as five years ago, at DDI, not to use conductive fill. I think it's a dinosaur whose day has come and gone, but there's probably some engineers out there right now going, no! Huuu! [laughter] We'll see... Yeah, well, show me the data, you know - when there's enough good research and data out there, people will stop doing it, I'm sure. Okay, so along the lines of thermal vias, here's another tip. A lot of times a thermal via will be located in a big plane, so you'll have the via, and the copper button around it will actually be in contact with the plane - this is a no-no. This makes it very hard to deal with the solder flow again. So what we ask our customers to do is a sprocket arrangement around that thermal via button: basically that button will exist, there'll be a gap between it and the plane around it, and then the sprockets are simply traces above, below, and to the right and left. That sprocket arrangement is very good design practice, frequently missed, and it's not unusual that we have to go back and retool the board to add those. Especially backplanes with active components; you see a lot of that, and they have to retool to add that feature to make it manufacturable. Well, these are good tips. You were concerned you didn't have good tips; I think these are really good tips actually. Well, glad you do. I do - not that I spend all my days, you know, pondering DFA these days, but that's good. You know, Jay, you and I worked together for a small bit of time, and something I've noticed over the years - I don't know, maybe five, ten years - it seems like there's been a migration of more EMS shops that have gone to having PCB designers in their shop, and not so much, I guess, fabricators - at least that I've noticed; they may be there, of course, with the larger shops - but why do you think that is, and why do you think that's a good idea?
Well, I think it's a great idea for both the customer and the EMS. As I mentioned before, we want to do more for the customer than just assemble the circuit cards, and if a customer - an Altium customer - finds themselves in a position where they need to outsource some of their design - maybe their designers are saturated, maybe they just need the resources - what better place to do it than with a guy that knows how to assemble the cards and really understands the issues about fabrication and about assembly? Chances are that the design from your EMS provider is going to go right through new product introduction without a hitch, whereas if you do it internally and you're not aware of some of the issues, you know, it won't go through without a hitch. We may not catch the problem until it's too late, and we may see several tooling iterations, and you'll see a delay - and nobody wants a delay during new product introduction; everybody's in a hurry to get their prototypes. And nobody wants to waste money, because that's going to be expensive too - Yeah - but we think there's a lot of synergy between that particular engineering function and getting to what we all want, which is production of electronics. Well, that does make sense in that we both know Mike Brown, and Mike I trusted implicitly to know about fab and assembly, and he would catch all that stuff. So he did have a broader understanding than maybe somebody who just does designs - say, at a consulting firm - because he's around it all the time, all day long, so there's certainly a lot of exposure there. So that makes sense. Well, first of all, welcome to the Altium family! You told me recently that your designer onboarded Altium Designer 18; that's exciting for us, so thank you for that. What made you - I'm gonna go for a little pat on the back for Altium right here - I want you to tell us why EIT chose to go with Altium Designer over perhaps another tool? Okay.
Well, we think some of the key features of Altium - first of all, it's an all-inclusive package, so it's schematic capture and printed circuit board design; you don't have to worry about working with two different pieces of software. We like the fact that there's an ECAD/MCAD interface, which makes it really easy for us to do three-dimensional fit models once we place the components. We love the room creation capability which, as I understand it, allows you to take a previous design - a piece of it - and just kind of cut and paste it right into your new design. Yeah. So you don't have to reinvent the wheel; I think that's pretty strong. And then, last but not least, very solid DFM rules capability that, you know, is really going to help us get to where we need to be. I mean, the fact is these boards need to be designed to IPC standards, and the fabricators have tailored their processes to meet those standards; when you send them something that's outside those bounds, the mechanism sort of locks up. It just doesn't work. You'll get a 'no-bid' or bells will go off, so those design rules are critical, and that you have the ability to set them up in advance is a nice piece of insurance. I think it's a reasonably priced tool compared to the other high-end tools as well, so we're pleased to have it. Good. Well, thank you again, we're happy to have you on board, and I'll needle you later about sending your designer to AltiumLive, because we're gonna have a really good conference with some good training coming up. So your designer will probably enjoy going, if you guys have the time and budget to do that. Well, keep me posted on when and where. I will - it's coming up in October. So Jay, we're kind of wrapping up here, but if you've listened to these podcasts before, you'll know that sometimes I like to ask designers or engineers like yourself what you like to do after hours, and we call this portion of the podcast Designers After Hours.
So, I know you have a couple of interesting hobbies... so why don't you tell those to our listeners, because I think they're kind of fun? Well, I'm looking forward to 10 days in Montana and Wyoming this September. I'm a big hiker, and nothing is more fun than hiking up in the beautiful Rocky Mountains. So we're gonna hit three national parks; we're gonna do about two to six hours of hiking a day, or as much as my legs can give me, and just have an awesome lifetime outdoor experience - and hopefully not run into a grizzly bear along the way... my wife's really worried about that, by the way [laughter]. Bring your grizzly repellent! And then, I know you wanted me to talk about one other thing... My favorite... Yeah, as you know, we business development types are very competitive, and what better setting to compete against each other than, you know, in a pub throwing some darts. So I've been a dart player since I was 19, and took it very seriously for a while - spent way too much time on it, actually. Traveled every weekend to tournaments all around the United States and played in a couple of US Opens. But that was a long time ago; now I just play for fun on Monday nights. That cracks me up. You are the one and only competitive traveling dart player that I know... Okay, but my favorite part is - tell about the beer-to-success ratio of a good dart player. Oh yeah, so we're throwing a 27-gram projectile at a target about the size of a dime, and it turns out that if you get nervous or you try too hard, you're not going to be very successful with that. So it turns out that that second or third beer really kind of smooths out your stroke, and you generally shoot a little bit better. At least that's what we rationalize. And what happens if you go over three? Yeah, that's a slippery slope indeed. You've got to be careful... That is so funny, oh my gosh. Okay, well, that is like one of my favorite Designers After Hours hobbies yet.
So when I come out, we'll find a place and throw a few. Okay. Alright, I'll get my three beers ready. Okay! No, actually, for me it'd be like half a beer - after three beers you'd be putting me in an Uber and sending me home. I'm a wimp. So Jay, thanks so much for your time. These have been great tips, and it's good to see your face, my friend, and I wish you all the success at EIT. We will certainly share all the links in the show notes, and we'll also put the link to AltiumLive in the show notes and encourage your designer to come out and join us as one of the new beasts of the Altium family. So thanks again for joining, and we'll talk to you soon, my friend. Thank you, Judy, it was my pleasure. Thanks again for listening to the podcast. This has been Judy Warner with the Altium OnTrack podcast and Jay Colognori from EIT, and we look forward to seeing you next time. Until then, always stay on track.

IEEE Rebooting Computing
Episode 16: Q&A with Dan Hutcheson, CEO and Chairman of VLSI Research Inc.

IEEE Rebooting Computing

Play Episode Listen Later Jun 26, 2018 9:07


Art + Music + Technology
Podcast 146: Robert Henke

Art + Music + Technology

Play Episode Listen Later Oct 9, 2016 73:31


What can be said about Robert Henke that hasn't already been said a thousand times? A tireless inventor, music producer, visual artist and programmer, Robert has been at the front of so much - and for me he's been a constant inspiration. He's also become a good friend over the years, and I can't believe it's taken me this long to interview him for the podcast. But I always want to be careful about his time; luckily, he's at a good point for a chat, and you get to listen in! In this talk, we go over Robert's ideas about music gear, collaboration (he's worked with some amazing people...), balancing different types of work, and choosing areas to explore. He also reveals himself to be an "obsessed pragmatic": he has a love for detail, but he has to fight his inner voices to make sure that he produces work. Who can't understand that? So please enjoy this talk, and if you get a chance, give a listen to the latest Monolake release: VLSI. It's a great combo of analog, digital and hybrid, and makes for some inspirational listening.

Watson School of Engineering and Applied Science
Watson Electrical and Computer Engineering Research Video Professor Twigg

Watson School of Engineering and Applied Science

Play Episode Listen Later May 9, 2011 3:24


 Dr. Twigg's research group works in the broad area of analog and mixed-signal VLSI with an emphasis on reconfigurable and programmable IC design.  Current research involves large-scale field-programmable analog arrays (FPAAs), rank modulation flash memories, capacitively coupled microphone interfaces and feedback controllers, and genetic algorithm-based analog system synthesis.

Graphics Architecture, Winter 2009
VLSI Trends: Why Graphics Hardware Is Fast

Graphics Architecture, Winter 2009

Play Episode Listen Later Jan 27, 2009 76:03


In this lecture we turn to the technology fundamentals behind the rise of the GPU: what are the technology trends of today's VLSI designs and how and why do they impact the GPU and its architecture? We also contrast CPUs and GPUs as well as the differences between task-parallel and time-multiplexed architectures.

Tech Talk Radio Podcast
August 23, 2008 Tech Talk Radio Show

Tech Talk Radio Podcast

Play Episode Listen Later Aug 23, 2008 57:47


File backup for clean install, Profiles in IT (Carver Mead and Lynn Conway, creators of the VLSI design revolution), significance of the Mead-Conway revolution, how Obama used the web (Blue State Digital, former Deaniacs, MyBO), VP announced via text message, website of the week (Plus Magazine, Internet magazine about the beauty and importance of math), Microsoft marketing initiative (Jerry Seinfeld and Bill Gates try to make Vista look good), importance of Gmail SSL feature (session hijacking, unencrypted cookies, man-in-the-middle attack, proper use of SSL), Pandora makes last stand (royalty fees take 70 percent of revenue, may have to close its doors), commercial CD turns 26 (first produced by Sony in 1982), and Food Science (types of potatoes, effect of starch, cooking methods, taste, texture). This show originally aired on Saturday, August 23, 2008, at 9:00 AM EST on 3WT Radio (WWWT).


SD Fundamentos VLSI con Cadence Design Framework II
Fundamentos VLSI con Cadence DFII: Introducción al Diseño Físico

SD Fundamentos VLSI con Cadence Design Framework II

Play Episode Listen Later Dec 31, 1969 39:38


HD Fundamentos VLSI con Cadence Design Framework II
Fundamentos VLSI con Cadence DFII: Simulación Básica del Diseño Eléctrico

HD Fundamentos VLSI con Cadence Design Framework II

Play Episode Listen Later Dec 31, 1969 22:28


HD Fundamentos VLSI con Cadence Design Framework II
Fundamentos VLSI con Cadence DFII: Introducción al Diseño Físico

HD Fundamentos VLSI con Cadence Design Framework II

Play Episode Listen Later Dec 31, 1969 39:38


SD Fundamentos VLSI con Cadence Design Framework II
Fundamentos VLSI con Cadence DFII: Simulación Básica del Diseño Eléctrico

SD Fundamentos VLSI con Cadence Design Framework II

Play Episode Listen Later Dec 31, 1969 22:28