Podcasts about WVI

  • 14 podcasts
  • 16 episodes
  • 29m avg duration
  • Infrequent episodes
  • Latest episode: Oct 6, 2023

POPULARITY (chart covering 2017-2024)



Latest podcast episodes about WVI

Composer of the Week
José Maurício Nunes Garcia (1767-1830)

Composer of the Week

Oct 6, 2023 • 87:23


Kate Molleson explores the life and music of Afro-Brazilian composer José Maurício Nunes Garcia.

Composer of the Week shines the spotlight on the Afro-Brazilian composer José Maurício Nunes Garcia. This series delves into the life and music of a once hugely prolific and popular composer, hailed by some as the Father of Brazilian Classical Music and compared by others to Mozart and Haydn. Garcia was born in Rio de Janeiro, and both his parents were children of slaves. Thanks to his exceptional musical talents, Garcia was able to move from his poverty-stricken beginnings to the very top of his society. He became Master of Music at the Cathedral. Later, when the Portuguese Court established themselves in the city, Garcia was appointed Master of Music at the Chapel Royal and Court Composer. Kate Molleson is joined by Professor Marcelo Hazan from the University of South Carolina and Professor Kirsten Schultz from Seton Hall University, who help her explore Garcia's incredible life story and music.

A hugely influential teacher of music from early on, Garcia established his own free music school and was invited into the homes of the elite to teach their daughters. His trajectory wasn't always plain sailing, however, and he frequently encountered racism. When it came to Garcia entering the Priesthood in the early 1790s, he had to undergo a number of tests to prove his worth, including providing impeccable references to offset the official concerns about his family background. Garcia was ordained, and with his musical skills finally recognised by the Church and Portuguese Court, he became the go-to composer for Saints' Days, Royal occasions and other commissions. However, many European musicians who came to Rio de Janeiro were not keen to be conducted by someone of his race. Eventually, Brazil gained independence from the Portuguese Empire and Garcia's Royal employers returned to Portugal, leaving Garcia struggling during turbulent times.

Music Featured:
Missa pastoril para a noite de natal (Kyrie eleison)
Tenuisti manum dexteram meam
Missa pastoril para a noite de natal (excerpt)
Fantasy No 1
Fantasy No 2
Lição No 7 da Segunda Parte
Tota Pulchra es Maria
Zemira, Overture
Immutemur Habitu
Sinfonia fúnebre
Tenuisti Manum
Crux Fidelis
Popule Meus
Francisco Manuel da Silva: Brazilian National Anthem
Fantasy No 6
Requiem Mass (excerpt)
Dies Sanctificatus
Justus cum ceciderit
Judas Mercator pessimus
Missa pastoril para a noite de natal (excerpt)
Overture in D major
Marcos António Portugal: Cuidados, tristes cuidados
Beijo a mão que me condena
Laudate pueri
In Monte Oliveti
Josef Haydn: Piano Sonata No. 62 in E flat, Hob. XVI:52 (Finale)
Lição No 8 da Primeira Parte
Lição No 4 da Segunda Parte
Lição No 8 da Segunda Parte
Laudate dominum
Requiem Mass (excerpt)
Creed No 9 in B flat (excerpt)
Fantasy No 4
Missa de Nossa Senhora da Conceição (excerpt)
Lição No 3 da Segunda Parte
Lição No 6 da Segunda Parte
Requiem Mass (excerpt)
Domine Tu Mihi Lavas Pedes
Inter Vestibulum

Presented by Kate Molleson
Produced by Luke Whitlock for BBC Audio Wales and West

For full track listings, including artist and recording details, and to listen to the pieces featured in full (for 30 days after broadcast), head to the series page for José Maurício Nunes Garcia (1767-1830): https://www.bbc.co.uk/programmes/m001qvv7

And you can delve into the A-Z of all the composers we've featured on Composer of the Week here: http://www.bbc.co.uk/programmes/articles/3cjHdZlXwL7W41XGB77X3S0/composers-a-to-z

The Nonlinear Library
LW - How does GPT-3 spend its 175B parameters? by Robert AIZI

The Nonlinear Library

Jan 15, 2023 • 13:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How does GPT-3 spend its 175B parameters?, published by Robert AIZI on January 13, 2023 on LessWrong.

[Target audience: Me from a week ago, and people who have some understanding of ML but want to understand transformers better on a technical level.]

Free advice for people learning new skills: ask yourself random questions. In answering them, you'll strengthen your understanding and find out what you really understand and what's actually useful. And some day, if you ask yourself a question that no one has asked before, that's a publication waiting to happen! So as I was reading up on transformers, I got fixated on this question: where are the 175 billion parameters in the architecture? Not in the literal sense (the parameters are in the computer), but how are they "spent" between various parts of the architecture - the attention heads vs feed-forward networks, for instance. And how can one calculate the number of parameters from the architecture's "size hyperparameters" like dimensionality and number of layers? The goal of this post is to answer those questions, and make sense of this nice table from the GPT-3 paper, deriving the n_params column from the other columns.

Primary Sources

Lots of resources about transformers conjure information from thin air, and I want to avoid that, so I'm showing all my work here. These are the relevant parts of the sources we'll draw from: Three more details we'll use, all from Section 2.1 of the GPT-3 paper:
The vocabulary size is n_vocab = 50257 tokens (via a reference to Section 2.3 of the GPT-2 paper).
The feed-forward networks are all a single layer which is "four times the size of the bottleneck layer", so d_ff = 4·d_model.
"All models use a context window of n_ctx = 2048 tokens."

Variable abbreviations

I'll use shorthand for the model size variables to increase legibility: n_layers = x, d_model = y, n_heads = z, d_head = w, n_vocab = v, n_ctx = u.

Where are the Parameters?

From Exhibit A, we can see that the original 1-hot encoding of tokens U is first converted to the initial "residual stream" h_0, then passed through transformer blocks (shown in Exhibits B-D), with n_layers blocks total. We'll break down parameter usage by stage:

Word Embedding Parameters

W_e is the word embedding matrix. Converts the shape (n_ctx, n_vocab) matrix U into a (n_ctx, d_model) matrix, so W_e has size (n_vocab, d_model), resulting in vy = n_vocab·d_model parameters.

Position Embedding Parameters

W_p is the position embedding matrix. Unlike the original transformer paper, GPT learns its position embeddings. W_p is the same size as the residual stream, (n_ctx, d_model), resulting in uy = n_ctx·d_model parameters.

Transformer Parameters - Attention

The attention sublayer of the transformer is one half of the basic transformer block (Exhibit B). As shown in Exhibit C, each attention head in each layer is parameterized by 3 matrices, W_Q^i, W_K^i, W_V^i, with one additional matrix W_O per layer which combines the attention heads. What Exhibit C calls d_k and d_v are both what GPT calls d_head, so W_Q^i, W_K^i, and W_V^i are all size (d_model, d_head). Thus each attention head contributes 3·d_model·d_head parameters. What Exhibit C calls h is what GPT calls n_heads, so W_O is size (n_heads·d_head, d_model) and therefore contributes n_heads·d_head·d_model parameters.
Total parameters per layer: For a single layer, there are n_heads attention heads, so the W_Q^i, W_K^i, and W_V^i matrices contribute 3·d_model·d_head·n_heads parameters, plus an additional n_heads·d_head·d_model parameters from W_O, for a total of 4·d_model·d_head·n_heads.
Total parameters: 4xyzw = 4·d_model·d_head·n_heads·n_layers.

Transformer Parameters - FFN

The "feed-forward network" (FFN) is the other half of the basic transformer block (Exhibit B). Exhibit D shows that it consists of a linear transform parameterized by W_1 and b_1, an activation function, and then another linear transform parameterized by W_2 and b_2, as one m...
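To make the running total concrete, here is a short Python sketch (my addition, not part of the episode) that assembles the counts derived above into an approximate n_params. The embedding and attention terms follow the formulas in the description; the feed-forward term is an assumption based on the standard W_1/b_1/W_2/b_2 block with d_ff = 4·d_model, since the description is cut off mid-derivation, and small contributions such as layer norms are ignored.

```python
def approx_gpt_params(n_layers, d_model, n_heads, d_head, n_vocab=50257, n_ctx=2048):
    """Approximate parameter count for a GPT-style decoder-only transformer."""
    embeddings = n_vocab * d_model + n_ctx * d_model      # W_e (token) + W_p (position)
    attn_per_layer = 4 * d_model * d_head * n_heads       # W_Q, W_K, W_V for every head, plus W_O
    d_ff = 4 * d_model                                    # "four times the size of the bottleneck layer"
    ffn_per_layer = 2 * d_model * d_ff + d_ff + d_model   # W_1, b_1, W_2, b_2 (assumed standard FFN)
    return embeddings + n_layers * (attn_per_layer + ffn_per_layer)

# Size hyperparameters of the largest model in the GPT-3 paper's table:
# n_layers = 96, d_model = 12288, n_heads = 96, d_head = 128
print(approx_gpt_params(96, 12288, 96, 128))  # 174594797568, roughly the quoted 175B
```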

The Nonlinear Library: LessWrong
LW - How does GPT-3 spend its 175B parameters? by Robert AIZI

The Nonlinear Library: LessWrong

Jan 15, 2023 • 13:57


The Nonlinear Library: LessWrong Daily
LW - How does GPT-3 spend its 175B parameters? by Robert AIZI

The Nonlinear Library: LessWrong Daily

Jan 15, 2023 • 13:57


The Underdog Vet Podcast
Episode 3 - Olivia Walter - 'Wildlife Vets International'

The Underdog Vet Podcast

Apr 2, 2022 • 56:43


Welcome to this episode of The Underdog Vet Podcast! In this episode's Animal Advocate Interview, I chatted with Olivia Walter, a conservation biologist and the Executive Director of Wildlife Vets International. WVI is a British charity which has been providing critical veterinary support to international wildlife and conservation projects since 2004. There is a vital medical aspect to conservation which is all too often forgotten, and so WVI sponsor top veterinary specialists to help conservationists and local vets work to save endangered species worldwide.

Olivia and I talked about why some species are in real danger of becoming extinct and how WVI vets are helping to save them. We also discussed the wider difficulties of wildlife conservation and why humans are both the problem and the solution. And Olivia told me all about the time she accompanied an anti-poaching patrol in Sumatra, where WVI are working to save tigers, and experienced first-hand the very real threats to their survival.

Olivia used quite a few terms in our chat that some people may not be familiar with, so to save you looking them up I've explained them here:
Deleterious means a negative effect that would cause harm or damage.
Contiguous means the two populations of leopard Olivia referred to are of the same origin or genetically related.
Genome means the whole set of genes present in an individual.
CITES certificate is a certificate issued by a country authorising the movement of endangered species, whole or as samples, across international borders.
Mesocarnivore is an animal whose diet consists of 50-70% meat.

Links:
WVI Website: https://www.wildlifevetsinternational.org
Winning photograph of Leopard: https://www.wildlifevetsinternational.org/winners-gallery
How you can help: https://www.wildlifevetsinternational.org/donate

--- Send in a voice message: https://anchor.fm/the-underdog-podcast/message

Rising To Respond
Episode 4: When there's trouble, we have to come together

Rising To Respond

Jul 29, 2020 • 14:54


Millions of people around the world look to faith actors over health officials for guidance on what to do during crises like the global pandemic. This episode explores how faith leaders are spreading awareness and mobilising communities around the world. Hear from Christian and Muslim faith leaders in Sierra Leone, World Vision's partnership lead for faith and development, and UK-based Christian thought leader Krish Kandiah.

Rising To Respond
Episode 3: Not just a global health crisis

Rising To Respond

Jul 15, 2020 • 15:11


COVID-19 has called for some extreme measures, like closing borders and shutting down local markets. But the ripple effect is impacting businesses and livelihoods in a catastrophic way. This episode dives into what that looks like around the world and what can be done to help. Hear from World Vision leaders, Maria Helena Semedo (Deputy Director-General of the UN's Food and Agriculture Organization), and a local business owner in Zambia.

EuFMD
The Importance of training on Risk assessment and risk mapping on TADs (Dr. Hanan Yousif)

EuFMD

Jan 7, 2020 • 4:22


During the CIRAD Qualitative Risk Mapping & Optimization of National Monitoring Systems meeting, held on 15-19 July 2019 at FAO headquarters in Rome, we interviewed Dr Hanan Yousif on the importance of training on risk assessment and risk mapping for transboundary animal diseases. Dr Hanan Yousif obtained her master's degree in veterinary epidemiology at the University of Khartoum, Sudan, with a special focus on rinderpest in Sudan. She later completed her PhD studies at the University of Khartoum, also in veterinary epidemiology, working on FMD in Sudan, and currently works for the Ministry of Animal Resources and Fisheries, Sudan, as senior veterinary epidemiologist and Director of the Disease Control Department. Dr Yousif supervises the implementation of zoosanitary measures to control transboundary animal diseases and other diseases in the country. She also works on capacity building in six countries in Africa and the Middle East, and at the national level in 18 states, by strengthening existing capacities in disease surveillance, risk assessment and risk mapping, and in the use of the livestock emergency guidelines and standards in emergency situations, in collaboration with the Arab Organization for Agricultural Development (AOAD), UN-FAO, and INGOs (ICRC and WVI).

Aviation Story
Avstry 1 - Gryphon McArthur

Aviation Story

May 8, 2012 • 38:17


Gryphon McArthur from Ocean Air Flight Services discusses what it takes to get a pilot's license, light sport aircraft, and what you can do after you have your license.

Chapel 1983 - 1984
02-17-84 Mr. Dean Hirsch

Chapel 1983 - 1984

Oct 26, 2011 • 30:25


Dean Hirsch is the global ambassador for World Vision. Hirsch, who was WVI president from 1996 through September 2009, is now charged with representing the organisation at various meetings, events and other engagements. He represents the president and the partnership on internal boards and with UN agencies, governments, non-governmental organizations (NGOs) and major donors. Hirsch holds a master of science degree from Indiana State University and a bachelor of arts and an honorary doctorate from Westmont College in Santa Barbara, California. He was also awarded honorary doctorates from Pepperdine University (2006), Eastern University of Pennsylvania (2001) and Myongji University in Seoul, Korea (1999). Prior to his appointment as WVI president in 1996, Hirsch served as chief operating officer, vice president for development and vice president for relief operations. He joined World Vision in 1976 as manager of computer operations.

Chapel 1993 - 1994
11-10-93 Dr. Hirsch ~Global Missions

Chapel 1993 - 1994

Jul 11, 2011 • 33:59


Dean Hirsch is the global ambassador for World Vision. Hirsch, who was WVI president from 1996 through September 2009, is now charged with representing the organisation at various meetings, events and other engagements. He represents the president and the partnership on internal boards and with UN agencies, governments, non-governmental organizations (NGOs) and major donors. Hirsch holds a master of science degree from Indiana State University and a bachelor of arts and an honorary doctorate from Westmont College in Santa Barbara, California. He was also awarded honorary doctorates from Pepperdine University (2006), Eastern University of Pennsylvania (2001) and Myongji University in Seoul, Korea (1999). Prior to his appointment as WVI president in 1996, Hirsch served as chief operating officer, vice president for development and vice president for relief operations. He joined World Vision in 1976 as manager of computer operations.

Chapel 1996 - 1997
10-30-96 Dr. Hirsch ~Being a Witness for Christ

Chapel 1996 - 1997

Jun 23, 2011 • 34:50


Dean Hirsch is the global ambassador for World Vision. Hirsch, who was WVI president from 1996 through September 2009, is now charged with representing the organisation at various meetings, events and other engagements. He represents the president and the partnership on internal boards and with UN agencies, governments, non-governmental organizations (NGOs) and major donors. Hirsch holds a master of science degree from Indiana State University and a bachelor of arts and an honorary doctorate from Westmont College in Santa Barbara, California. He was also awarded honorary doctorates from Pepperdine University (2006), Eastern University of Pennsylvania (2001) and Myongji University in Seoul, Korea (1999). Prior to his appointment as WVI president in 1996, Hirsch served as chief operating officer, vice president for development and vice president for relief operations. He joined World Vision in 1976 as manager of computer operations.

Chapel 1998 - 1999
10-21-98 Dean Hirsch

Chapel 1998 - 1999

Jun 10, 2011 • 32:14


Dean Hirsch is the global ambassador for World Vision. Hirsch, who was WVI president from 1996 through September 2009, is now charged with representing the organisation at various meetings, events and other engagements. He represents the president and the partnership on internal boards and with UN agencies, governments, non-governmental organizations (NGOs) and major donors. Hirsch holds a master of science degree from Indiana State University and a bachelor of arts and an honorary doctorate from Westmont College in Santa Barbara, California. He was also awarded honorary doctorates from Pepperdine University (2006), Eastern University of Pennsylvania (2001) and Myongji University in Seoul, Korea (1999). Prior to his appointment as WVI president in 1996, Hirsch served as chief operating officer, vice president for development and vice president for relief operations. He joined World Vision in 1976 as manager of computer operations.

Chapel 1999 - 2000
10-20-99 Dean Hirsch

Chapel 1999 - 2000

Jun 7, 2011 • 29:17


Dean Hirsch is the global ambassador for World Vision. Hirsch, who was WVI president from 1996 through September 2009, is now charged with representing the organisation at various meetings, events and other engagements. He represents the president and the partnership on internal boards and with UN agencies, governments, non-governmental organizations (NGOs) and major donors. Hirsch holds a master of science degree from Indiana State University and a bachelor of arts and an honorary doctorate from Westmont College in Santa Barbara, California. He was also awarded honorary doctorates from Pepperdine University (2006), Eastern University of Pennsylvania (2001) and Myongji University in Seoul, Korea (1999). Prior to his appointment as WVI president in 1996, Hirsch served as chief operating officer, vice president for development and vice president for relief operations. He joined World Vision in 1976 as manager of computer operations.

Chapel 2000 - 2001
10-23-00 Dean Hirsch

Chapel 2000 - 2001

Jun 1, 2011 • 22:38


Dean Hirsch is the global ambassador for World Vision. Hirsch, who was WVI president from 1996 through September 2009, is now charged with representing the organisation at various meetings, events and other engagements. He represents the president and the partnership on internal boards and with UN agencies, governments, non-governmental organizations (NGOs) and major donors. Hirsch holds a master of science degree from Indiana State University and a bachelor of arts and an honorary doctorate from Westmont College in Santa Barbara, California. He was also awarded honorary doctorates from Pepperdine University (2006), Eastern University of Pennsylvania (2001) and Myongji University in Seoul, Korea (1999). Prior to his appointment as WVI president in 1996, Hirsch served as chief operating officer, vice president for development and vice president for relief operations. He joined World Vision in 1976 as manager of computer operations.

Chapel 2000 - 2001
10-25-00 Dean Hirsch

Chapel 2000 - 2001

Jun 1, 2011 • 25:07


Dean Hirsch is the global ambassador for World Vision. Hirsch, who was WVI president from 1996 through September 2009, is now charged with representing the organisation at various meetings, events and other engagements. He represents the president and the partnership on internal boards and with UN agencies, governments, non-governmental organizations (NGOs) and major donors. Hirsch holds a master of science degree from Indiana State University and a bachelor of arts and an honorary doctorate from Westmont College in Santa Barbara, California. He was also awarded honorary doctorates from Pepperdine University (2006), Eastern University of Pennsylvania (2001) and Myongji University in Seoul, Korea (1999). Prior to his appointment as WVI president in 1996, Hirsch served as chief operating officer, vice president for development and vice president for relief operations. He joined World Vision in 1976 as manager of computer operations.