Podcasts about natural language generation

  • 34 PODCASTS
  • 41 EPISODES
  • 39m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Feb 9, 2025 LATEST

POPULARITY (trend chart, 2017–2024)


Best podcasts about natural language generation

Latest podcast episodes about natural language generation

Agile Innovation Leaders
E048 Bala Madhusoodhanan on Critical Considerations for Leaders when Adopting AI Solutions

Feb 9, 2025 · 37:58


Bio   Bala has rich experience in retail technology and process transformation. Most recently, he worked as a Principal Architect for Intelligent Automation, Innovation & Supply Chain in a global Fortune 100 retail corporation. Currently he works for a luxury brand as Principal Architect for Intelligent Automation providing technology advice for the responsible use of technology (Low Code, RPA, Chatbots, and AI). He is passionate about technology and spends his free time reading, writing technical blogs and co-chairing a special interest group with The OR Society.   Interview Highlights 02:00 Mentors and peers 04:00 Community bus 07:10 Defining AI 08:20 Contextual awareness 11:45 GenAI 14:30 The human loop 17:30 Natural Language Processing 20:45 Sentiment analysis 24:00 Implementing AI solutions 26:30 Ethics and AI 27:30 Biased algorithms 32:00 EU AI Act  33:00 Responsible use of technology   Connect   Bala Madhusoodhanan on LinkedIn     Books and references   ·       https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html  - NLP   ·       https://www.theregister.com/2021/05/27/clearview_europe/  - Facial Technology Issue   ·       https://www.designnews.com/electronics-test/apple-card-most-high-profile-case-ai-bias-yet  - Apple Card story   ·       https://www.ft.com/content/2d6fc319-2165-42fb-8de1-0edf1d765be3  - Data Centre growth   ·       https://www.technologyreview.com/2024/02/06/1087793/what-babies-can-teach-ai/   ·       Independent Audit of AI Systems -   ·       Home | The Alan Turing Institute   ·       Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World, Marco Iansiti & Karim R. Lakhani   ·       AI Superpowers: China, Silicon Valley, and the New World, Kai-Fu Lee   ·       The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You, Mike Walsh   ·       Human+Machine: Reimagining Work in the Age of AI, Paul R Daugherty, H. James Wilson   ·       Superintelligence: Paths, Dangers, Strategies, Nick Bostrom   ·       The Alignment Problem: How Can Artificial Intelligence Learn Human Values, Brian Christian   ·       Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI, Reid Blackman   ·       Wanted: Human-AI Translators: Artificial Intelligence Demystified, Geertrui Mieke De Ketelaere   ·       The Future of Humanity: Terraforming Mars, Interstellar Travel, Immortality, and Our Destiny Beyond, Michio Kaku, Feodor Chin et al  Episode Transcript Intro: Hello and welcome to the Agile Innovation Leaders podcast. I'm Ula Ojiaku. On this podcast I speak with world-class leaders and doers about themselves and a variety of topics spanning Agile, Lean Innovation, Business, Leadership and much more – with actionable takeaways for you the listener. Ula Ojiaku So I have with me here, Bala Madhusoodhanan, who is a principal architect with a global luxury brand, and he looks after their RPA and AI transformation. So it's a pleasure to have you on the Agile Innovation Leaders podcast, Bala, thank you for making the time. Bala Madhusoodhanan It's a pleasure to have a conversation with the podcast and the podcast audience, Ula. I follow the podcast and there have been fantastic speakers in the past. So I feel privileged to join you on this conversation. Ula Ojiaku Well, the privilege is mine. So could you start off with telling us about yourself Bala, what have been the key points or the highlights of your life that have led to you being the Bala we know now? 
Bala Madhusoodhanan It's putting self into uncharted territory. So my background is mechanical engineering, and when I got the job, it was either you go into the mechanical engineering manufacturing side or the software side, which was slightly booming at that point of time, and obviously it was paying more then decided to take the software route, but eventually somewhere the path kind of overlapped. So from a mainframe background, started working on supply chain, and then came back to optimisation, tied back to manufacturing industry. Somewhere there is an overlap, but yeah, that was the first decision that probably got me here. The second decision was to work in a UK geography, rather than a US geography, which is again very strange in a lot of my peers. They generally go to Silicon Valley or East Coast, but I just took a choice to stay here for personal reasons. And then the third was like the mindset. I mean, I had over the last 15, 20 years, I had really good mentors, really good peers, so I always had their help to soundboard my crazy ideas, and I always try to keep a relationship ongoing. Ula Ojiaku What I'm hearing is, based on what you said, lots of relationships have been key to getting you to where you are today, both from mentors, peers. Could you expand on that? In what way? Bala Madhusoodhanan The technology is changing quite a lot, at least in the last 10 years. So if you look into pre-2010, there was no machine learning or it was statistics. People were just saying everything is statistics and accessibility to information was not that much, but post 2010, 2011, people started getting accessibility. Then there was a data buzz, big data came in, so there were a lot of opportunities where I could have taken a different career path, but every time I was in a dilemma which route to take, I had someone with whom either I have worked or who was my team lead or manager to guide me to tell me, like, take emotion out of the decision making and think in a calm mind, because you might jump into something and you might like it, you might not like it, you should not regret it. So again, over the course of so many such decisions, my cognitive mind has also started thinking about it. So those conversations really help. And again, collective experience. If you look into the decision making, it's not just my decision, I'm going through conversations that I had with people where they have applied their experience, so it's not just me or just not one situation, and to understand the why behind that, and that actually helps. In short, it's like a collection of conversations that I had with peers. A few of them are visionary leaders, they are good readers. So they always had a good insight on where I should focus, where I shouldn't focus, and of late recently, there has been a community bus. So a lot of things are moving to open source, there is a lot of community exchange of conversation, the blogging has picked up a lot. So, connecting to those parts also gives you a different dimension to think about. Ula Ojiaku So you said community bus, some of the listeners or people who are watching the video might not understand what you mean by the community bus. Are you talking about like meetups or communities that come around to discuss shared interests? Bala Madhusoodhanan If you are very much specifically interested in AI, or you are specifically interested in, power platform or a low code platform, there are a lot of content creators on those topics. 
You can go to YouTube, LinkedIn, and you get a lot of information about what's happening. They do a lot of hackathons, again, you need to invest time in all these things. If you don't, then you are basically missing the boat, but there are various channels like hackathon or meetup groups, or, I mean, it could be us like a virtual conversation like you and me, we both have some passionate topics, that's why we resonate and we are talking about it. So it's all about you taking an initiative, you finding time for it, and then you have tons and tons of information available through community or through conferences or through meetup groups. Ula Ojiaku Thanks for clarifying. So, you said as well, you had a collection of conversations that helped you whenever you were at a crossroad, some new technology or something emerges or there's a decision you had to make and checking in with your mentors, your peers and your personal Board of Directors almost, that they give you guidance. Now, looking back, would you say there were some turns you took that knowing what you know now, you would have done differently? Bala Madhusoodhanan I would have liked to study more. That is the only thing, because sometimes the educational degree, even though without a practical knowledge has a bigger advantage in certain conversation, otherwise your experience and your content should speak for you and it takes a little bit of effort and time to get that trust among leaders or peers just to, even them to trust saying like, okay, this person knows what he's talking about. I should probably trust rather than, someone has done a PhD and it's just finding the right balance of when I should have invested time in continuing my education, if I had time, I would have gone back two years and did everything that I had done, like minus two years off-set it by two years earlier. It would have given me different pathways. That is what I would think, but again, it's all constraints. I did the best at that point in time with whatever constraints I had. So I don't have any regret per se, but yeah, if there is a magic wand, I would do that. Ula Ojiaku So you are a LinkedIn top voice from AI. How would you define AI, artificial intelligence? Bala Madhusoodhanan I am a bit reluctant to give a term Artificial Intelligence. It's in my mind, it is Artificial Narrow Intelligence, it's slightly different. So let me start with a building block, which is machine learning. So machine learning is like a data labeller. You go to a Tesco store, you read the label, you know it is a can of soup because you have read the label, your brain is not only processing that image, it understands the surrounding. It does a lot of things when you pick that can of soup. You can't expect that by just feeding one model to a robot. So that's why I'm saying like it's AI is a bit over glorified in my mind. It is artificial narrow intelligence. What you do to automate certain specific tasks using a data set which is legal, ethical, and drives business value is what I would call machine learning, but yeah, it's just overhyped and heavily utilised term AI. Ula Ojiaku You said, there's a hype around artificial intelligence. So what do you mean by that? And where do you see it going? Bala Madhusoodhanan Going back to the machine learning definition that I said, it's basically predicting an output based on some input. That's as simple as what we would say machine learning. The word algorithm is basically something like a pattern finder. 
What you're doing is you are giving a lot of data, which is properly labelled, which has proper diversity of information, and there are multiple algorithms that can find patterns. The cleverness or engineering mind that you bring in is to select which pattern or which algorithm you would like to use for your use case. Now you're channelling the whole machine learning into one use case. That's why I'm going with the term narrow intelligence. Computers can do brilliant jobs. So you ask computers to do something like Rubik's cube solving. It will do it very quickly because the task is very simple and it is just doing a lot of calculation. You give a Rubik's cube to a kid. It has to apply itself. The brain is not trained enough, so it has to cognitively learn. Maybe it will be faster. So anything which is just pure calculation, pure computing, if the data is labelled properly, you want to predict an outcome, yes, you can use computers. One of the interesting videos that I showed in one of my previous talks was a robot trying to walk across the street. This was in 2018 or 19. The first video was basically talking about a robot crossing a street and there were vehicles coming across and the robot just had a headbutt and it just fell off. Now a four year old kid was asked to walk and it knew that I have to press a red signal. So it went to the signal stop. It knew, or the baby knew, that I can only walk when it is green. And then it looks around and then walks, so you can see the difference – a four year old kid has a contextual awareness of what is happening, whereas the robot, which is supposed to be called artificial intelligence, couldn't see that. So again, if you look, our human brains have evolved over millions of years. There are like 10 billion neurons or something, and it is highly optimised. So when I sleep, there are different sets of neurons which are running. When I speak to you, my eyes and ears are running, my motion sensor neurons are running, but these are all highly optimised. So the mother control knows how much energy should be sent on which neuron, right, whereas all these large language models, there is only one task. You ask it, it's just going to do that. It doesn't have that intelligence to optimise. When I sleep, maybe 90 percent of my neurons are sleeping. It's getting recharged. Only the dream neurons are working. Whereas once you put a model live, it doesn't matter, all the hundred thousand neurons would run. So, yeah, it's in its infancy, maybe with quantum computing, maybe with more power and better chips things might change, but I don't see that happening in the next five to 10 years. Ula Ojiaku Now, what do you say about Gen AI? Would you also classify generative AI as purely artificial narrow intelligence? Bala Madhusoodhanan The thing with generative AI is you're trying to generalise a lot of use cases, say ChatGPT, you can throw in a PDF, you can ask something, or you can say, hey, can you create content for my blog or things like that, right? Again, all it is trying to do is it has some historical content with which it is trying to come up with a response. So the thing that I would say is humans are really good with creativity. If a problem is thrown at a person, he will find creative ways to solve it. The tool with which we are going to solve it might be a GenAI tool, I don't know, because I don't know the problem, but because GenAI is in a hype cycle, every problem doesn't need GenAI, that's my view.
So there was an interesting research which was done by someone in Montreal University. It talks about 10 of the basic tasks like converting text to text or text to speech and with a generative AI model or multiple models, because you have a lot of vendors providing different GenAI models, and then they went with task specific models and the thing that they found was the task specific models were cheap to run, very, very scalable and robust and highly accurate, right. Whereas GenAI, if, when you try to use it and when it goes into a production ready or enterprise ready and if it is used by customers or third party, which are not part of your ecosystem, you are putting yourself in some kind of risk category. There could be a risk of copyright issues. There could be a risk of IP issues. There could be risk of not getting the right consent from someone. I can say, can you create an image of a podcaster named Ula? You never know because you don't remember that one of your photos on Google or Twitter or somewhere is not set as private. No one has come and asked you saying, I'm using this image. And yeah, it's finding the right balance. So even before taking the technology, I think people should think about what problem are they trying to solve? In my mind, AI or artificial intelligence, or narrow intelligence can have two buckets, right. The first bucket is to do with how can I optimise the existing process? Like there are a lot of things that I'm doing, is there a better way to do it? Is there an efficient way to do it? Can I save time? Can I save money? Stuff like that. So that is an optimisation or driving efficiency lever. Other one could be, I know what to do. I have a lot of data, but I don't have infrastructure or people to do it, like workforce augmentation. Say, I have 10 data entry persons who are graduate level. Their only job is to review the receipts or invoices. I work in FCA. I have to manually look at it, approve it, and file it, right? Now it is a very tedious job. So all you are doing is you are augmenting the whole process with an OCR engine. So OCR is Optical Character Recognition. So there are models, which again, it's a beautiful term for what our eyes do. When we travel somewhere, we get an invoice, we exactly know where to look, right? What is the total amount? What is the currency I have paid? Have they taken the correct credit card? Is my address right? All those things, unconsciously, your brain does it. Whereas our models given by different software vendors, which have trained to capture these specific entities which are universal language, to just pass, on data set, you just pass the image on it. It just picks and maps that information. Someone else will do that job. But as part of your process design, what you would do is I will do the heavy lifting of identifying the points. And I'll give it to someone because I want someone to validate it. It's human at the end. Someone is approving it. So they basically put a human in loop and, human centric design to a problem solving situation. That's your efficiency lever, right? Then you have something called innovation level - I need to do something radical, I have not done this product or service. Yeah, that's a space where you can use AI, again, to do small proof of concepts. One example could be, I'm opening a new store, it's in a new country, I don't know how the store layout should look like. These are my products. This is the store square footage. Can you recommend me the best way so that I can sell through a lot? 
Now, a visual merchandising team will have some ideas on where the things should be, they might give that prompt. Those texts can be converted into image. Once you get the base image, then it's human. It's us. So it will be a starting point rather than someone implementing everything. It could be a starting point. But can you trust it? I don't know. Ula Ojiaku And that's why you said the importance of having a human in the loop. Bala Madhusoodhanan Yeah. So the human loop again, it's because we humans bring contextual awareness to the situation, which machine doesn't know. So I'll tie back this to the NLP. So Natural Language Processing, it has two components, so you have natural language understanding and then you have natural language generation. When you create a machine learning model, all it is doing is, it is understanding the structure of language. It's called form. I'm giving you 10,000 PDFs, or you're reading a Harry Potter book. There is a difference between you reading a Harry Potter book and the machine interpreting that Harry Potter book. You would have imagination. You will have context of, oh, in the last chapter, we were in the hilly region or in a valley, I think it will be like this, the words like mist, cold, wood. You started already forming images and visualising stuff. The machine doesn't do that. Machine works on this is the word, this is a pronoun, this is the noun, this is the structure of language, so the next one should be this, right? So, coming back to the natural language understanding, that is where the context and the form comes into play. Just think of some alphabets put in front of you. You have no idea, but these are the alphabet. You recognise A, you recognise B, you recognise the word, but you don't understand the context. One example is I'm swimming against the current. Now, current here is the motion of water, right? My current code base is version 01. I'm using the same current, right? The context is different. So interpreting the structure of language is one thing. So, in natural language understanding, what we try to do is we try to understand the context. NLG, Natural Language Generation, is basically how can I respond in a way where I'm giving you an answer to your query. And this combined is NLP. It's a big field, there was a research done, the professor is Emily Bender, and she one of the leading professors in the NLP space. So the experiment was very funny. It was about a parrot in an island talking to someone, and there was a shark in between, or some sea creature, which basically broke the connection and was listening to what this person was saying and mimicking. Again, this is the problem with NLP, right? You don't have understanding of the context. You don't put empathy to it. You don't understand the voice modulation. Like when I'm talking to you, you can judge what my emotion cues are, you can put empathy, you can tailor the conversation. If I'm feeling sad, you can put a different spin, whereas if I'm chatting to a robot, it's just going to give a standard response. So again, you have to be very careful in which situation you're going to use it, whether it is for a small team, whether it is going to be in public, stuff like that. Ula Ojiaku So that's interesting because sometimes I join the Masters of Scale strategy sessions and at the last one there was someone whose organisational startup was featured and apparently what their startup is doing is to build AI solutions that are able to do sentiment analysis. 
And I think some of these, again, in their early stages, but some of these things are already available to try to understand the tone of voice, the words they say, and match it with maybe the expression and actually can transcribe virtual meetings and say, okay, this person said this, they looked perplexed or they looked slightly happy. So what do you think about that? I understand you're saying that machines can't do that, but it seems like there are already organisations trying to push the envelope towards that direction. Bala Madhusoodhanan So the example that you gave, sentiment of the conversation, again, it is going by the structure or the words that I'm using. I am feeling good. So good, here is positive sentiment. Again, for me the capability is slightly overhyped, the reason being is it might do 20 percent or 30 percent of what a human might do, but the human is any day better than that particular use case, right? So the sentiment analysis typically works on the sentiment data set, which would say, these are the certain proverbs, these are the certain types of words, this generally referred to positive sentiment or a good sentiment or feel good factor, but the model is only good as good as the data is, right? So no one is going and constantly updating that dictionary. No one is thinking about it, like Gen Z have a different lingo, millennials had a different lingo. So, again, you have to treat it use case by use case, Ula. Ula Ojiaku At the end of the day, the way things currently are is that machines aren't at the place where they are as good as humans. Humans are still good at doing what humans do, and that's the key thing. Bala Madhusoodhanan Interesting use case that I recently read probably after COVID was immersive reading. So people with dyslexia. So again, AI is used for good as well, I'm not saying it is completely bad. So AI is used for good, like, teaching kids who are dyslexic, right? Speech to text can talk, or can translate a paragraph, the kid can hear it, and on the screen, I think one note has an immersive reader, it actually highlights which word it is, uttering into the ears and research study showed that kids who were part of the study group with this immersive reading audio textbook, they had a better grasp of the context and they performed well and they were able to manage dyslexia better. Now, again, we are using the technology, but again, kudos to the research team, they identified a real problem, they formulated how the problem could be solved, they were successful. So, again, technology is being used again. Cancer research, they invest heavily, in image clustering, brain tumours, I mean, there are a lot of use cases where it's used for good, but then again, when you're using it, you just need to think about biases. You need to understand the risk, I mean, everything is risk and reward. If your reward is out-paying the minimum risk that you're taking, then it's acceptable. Ula Ojiaku What would you advise leaders of organisations who are considering implementing AI solutions? What are the things we need to consider? Bala Madhusoodhanan Okay. So going back to the business strategy and growth. So that is something that the enterprises or big organisations would have in mind. Always have your AI goals aligned to what they want. So as I said, there are two buckets. One is your efficiency driver, operational efficiency bucket. The other one is your innovation bucket. Just have a sense check of where the business wants to invest in. 
Just because AI is there doesn't mean you have to use it, right? Look into opportunities where you can drive more value. So that would be my first line of thought. The second would be more to do with educating leaders about AI literacy, like what each model is, what do they do, what are the pitfalls, the ethical awareness about use of AI, data privacy is big. So again, that education is just like high level, with some examples on the same business domain where it has been successful, where it has been not so successful, what are the challenges that they faced? That's something that I would urge everyone to invest time in. I think I did mention about security again, over the years, the practice has been that security is always kept till last. So again, I was fortunate enough to work in organisations where a security first mindset was put in place, because once you have a proof of value, once you show that to people, people get excited, and it's about messaging it and making sure it is very secure, protecting the end users. So the third one would be talking about having secure first design policies or principles. Machine learning or AI is of no good if your data quality is not there. So having a data strategy is something that I would definitely recommend. Start small. I mean, just like agile, you take a value, you start small, you realise whether your hypothesis was correct or not, you monitor how you performed and then you think about scale. Just a hello world doesn't mean that you have mastered it. So have that mindset, start small, monitor, have constant feedback, and then you think about scaling. Ula Ojiaku What are the key things about ethics and AI, do you think, leaders should be aware of at this point in time? Bala Madhusoodhanan So again, ethics is very subjective. So it's about having different stakeholders give their honest opinion of whether your solution is the right thing to do against the values of the enterprise. And it's not your view or my view, it's a consensus view, and in certain things where people are involved, you might need to get HR, you might need to get legal, you might need to get the brand reputation team to come and assist you, because you don't understand the why behind certain policies that were put in place. So one is, is the solution or is the AI ethical to the core values of the enterprise? So that's the first sense check that you need to do. If you pass that sense check, then come a lot of other threats, I would say, like, is the model that I'm using, did it have a fair representation of the data set? There's a classic case study of a big cloud computing giant using an AI algorithm to filter resumes, and they had to stop it immediately because the data set was all Ivy League, male, white, dominant, it didn't have the right representation. Over the 10 years, if I'm just hiring certain types of people, my data is inherently biased, no matter how good my algorithm is, if I don't have that data set. The other example is Clearview AI. They got into trouble for using very biased data to give an outcome on decision making related to immigration, which has a bigger ramification. Then you talk about fairness, whether the AI system is fair to give you an output. So there was a funny story about a man and a woman in California living together, and I think the woman wasn't provided a credit card, even though everything, the postcode is the same, both of them work in the same company, and it was, I think it has to do with Apple Pay.
Apple Pay wanted to bring in a silver credit card, Apple Card or whatever it is, but then it is so unfair that the woman, who was equally qualified, was not given the right credit limit, and the bank clearly said the algorithm said so. Then you have the privacy concern, right? So all these generic models that you have that are available, even ChatGPT for that matter. Now you can chat with ChatGPT multiple times. You can talk about someone like Trevor Noah and you can say, hey, can you create a joke? Now it has been trained with the jokes that he has done, it might be available publicly. But has the creator of the model got consent saying, hey Trevor, I'm going to use your content so that I can give better responses? And how many such consents? Even Wikipedia, if you look into Wikipedia, about 80 percent of the information is public, but it is not diversified. What I mean by that is you can search for a lot of information if the person is from America or from the UK or from Europe, maybe from India to some extent, but what is the quality of data if you think about countries in Africa, what do you think about South America? I mean, it is not representing the total diversity of data, and we have this large language model which has been just trained on that data, right? So there is a bias, and because of that bias, your outcome might not be fair. So these two are the main things, and of course the privacy concern. So if someone goes and says, hey, you have used my data, you didn't even ask me, then you're into a lawsuit. Without getting proper consent, again, it's a bad world, it's very fast moving and people don't even, including me, I don't even read all the terms and conditions, I just scroll down, tick, confirm, but those things are the things where I think education should come into play. Think about it, because people don't understand what could go wrong, not to them, but to someone like them. Then there is a big fear of job displacement, like if I put this AI system in, what will I do with my workforce? Say I had ten people, you need to think about it, you need to reimagine your workplace. These are the ten jobs my ten people are doing. If I augment six of those jobs, how can I use my ten resources effectively to do something different? That piece of the puzzle always, again, goes back to the core values of the company, what they think about their people, how everything maps back, but it's just that it needs a lot of inputs from multiple stakeholders. Ula Ojiaku It ties back to the enterprise strategy, there are the values, but with technology as it has evolved over the years, things will be made obsolete, but there are new opportunities that are created, so moving from when people travelled with horses and buggies and then the automobile came up. Yes, there wasn't as much demand for horseshoes and horses and buggies, but there was a new industry, the people who would be mechanics or run garages and things like that. So I think it's really about that. Like, going back to what you're saying, how can you redeploy people? And that might involve, again, training, reskilling, and investing in education of the workforce so that they're able to harness AI and to do those creative things that you've emphasised over this conversation about human beings, that creative aspect, that ability to understand context and nuance and apply it to the situation. Bala Madhusoodhanan So I was fortunate to work with ForHumanity, an NGO which basically is trying to certify people to look into auditing AI systems.
So the EU AI Act is now in place, it will be enforced soon. So you need people to have controls on all these AI systems to protect - it's done to protect people, it's done to protect the enterprise. So I was fortunate enough to be part of that community. I'm still working closely with the Operational Research Society. Again, you should be passionate enough, you should find time to do it, and if you do it, then the universe will find a way to give you something interesting to work with. And our society, The Alan Turing Institute, the ForHumanity Society, I had a few ICO workshops, which was quite interesting because when you hear perspectives from people from different facets of life, like lawyers and solicitors, you would think, ah, this statement, I wouldn't interpret it in this way. It was a good learning experience and I'm sure if I have time, I would still continue to do that and invest time in ethical AI. As technology, it's not only AI, it's ethical use of technology, so sustainability is also part of the ethical bucket if you look into it. So there was an interesting paper; it talks about how many data centres have been opened between 2018 and 2024, which is like six years, and the power consumption has gone from X to three times X or two times X, so we have opened a lot. We have already caused damage to the environment with all this technology, and just because the technology is there, it doesn't mean you have to use it, but again, it's that educational bit, what is the right thing to do? And even the ESG awareness, people are not aware. Like now, if you go to the current TikTok trenders, they know I need to look into certified B Corp when I am buying something. The reason is because they know, and they're more passionate about saving the world. Maybe we are not, I don't know, but again, once you start educating and telling those stories, humans are really good, so you will have a change of heart. Ula Ojiaku What I'm hearing you say is that education is key to help us to make informed choices. There is a time and place where you would need to use AI, but not everything requires it, and if we're more thoughtful in how we approach these, because these are tools at the end of the day, then we can at least try to be more balanced in taking advantage of opportunities versus the risks around them and the impact these decisions and the tools that we choose to use make on the environment. Now, what books have you found yourself recommending most to people, and why? Bala Madhusoodhanan Because we have been talking on AI, AI Superpowers is one book, which was written by Kai-Fu Lee. There is this book by Brian Christian, The Alignment Problem: Machine Learning and Human Values, about the alignment of human values and machines. It was basically talking about what are the human values, where do you want to use machine learning, how do you basically come up with decision making; that's a really interesting read. Then there is a book called Ethical Machines by Reid Blackman. So it talks about all the ethical facets of AI, like biases, fairness, data privacy, transparency, explainability, and he gives quite a detailed example and walkthrough of what that means. Another interesting book was Wanted: Human-AI Translators: Artificial Intelligence Demystified by a Dutch professor, again, a really lovely narration of what algorithms are, what AI is, and all you should think about, what controls and stuff like that. So that is an interesting book.
Harvard Professor Karim Lakhani wrote something called Competing in the Age of AI, that's a good book. The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You by Mike Walsh is another good book, which I finished a couple of months back. Ula Ojiaku And if the audience wants to find you, how can they reach out to you? Bala Madhusoodhanan They can always reach out to me on LinkedIn, I would be happy to touch base through LinkedIn. Ula Ojiaku Awesome. And do you have any final words and/or ask of the audience? Bala Madhusoodhanan The final word is, again, responsible use of technology. Think about not just the use case, think about the environmental impact, think about the future generation, because I think the damage is already done. So, at least not in this lifetime, maybe three or four lifetimes down the line, it might not be the beautiful earth that we have. Ula Ojiaku It's been a pleasure, as always, speaking with you, Bala, and thank you so much for sharing your insights and wisdom, and thank you for being a guest on the Agile Innovation Leaders Podcast. Bala Madhusoodhanan Thank you, lovely conversation, and yeah, looking forward to connecting with more like-minded LinkedIn colleagues. Ula Ojiaku That's all we have for now. Thanks for listening. If you liked this show, do subscribe at www.agileinnovationleaders.com or your favourite podcast provider. Also share with friends and do leave a review on iTunes. This would help others find this show. I'd also love to hear from you, so please drop me an email at ula@agileinnovationleaders.com. Take care and God bless!
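
The dictionary-based sentiment analysis Bala describes in the interview above is easy to see in a minimal sketch. This is not any vendor's implementation; the tiny lexicon below is invented purely for illustration, and its staleness is exactly the limitation he raises: newer slang simply scores as neutral.

```python
# A minimal sketch of lexicon-based sentiment scoring, as discussed in the interview.
# The LEXICON is invented for illustration; real systems use large curated word lists,
# and a stale list is the weakness pointed out above.
LEXICON = {"good": 1, "great": 2, "love": 2, "bad": -1, "awful": -2, "hate": -2}

def sentiment(text: str) -> str:
    # Sum the scores of known words; unknown words contribute nothing.
    score = sum(LEXICON.get(word.strip(".,!?").lower(), 0) for word in text.split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I am feeling good"))   # positive
print(sentiment("this track slaps"))    # neutral, because the slang is not in the lexicon
```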

Smart Software with SmartLogic
Creating the Standd AI-Native Due Diligence Platform with Stephen Solka

Nov 21, 2024 · 48:44


Stephen Solka, CTO and co-founder of Standd.io, joins Elixir Wizards Owen and Charles to share the journey of building an AI-native deal intelligence and due diligence platform. Designed to streamline document analysis and text generation for venture capital firms, Standd.io leverages large language models and AI tools to address key customer pain points in document workflows. Stephen explains how Elixir and Phoenix LiveView enabled rapid UI iteration and seamless integration between the front-end and back-end. The conversation also explores the human side of startup life. Stephen reflects on balancing tech debt with customer demands, the value of accelerators in building networks and securing funding, and the challenges of pricing in early-stage startups. He emphasizes the importance of validating ideas with potential customers and learning from the hurdles of growing a business. Tune in for insights on leveraging AI in Elixir, solving real-world problems, and navigating the journey from concept to company. Topics discussed in this episode: The journey from self-taught programmer to CTO The perks of Phoenix LiveView for rapid UI development Integrating front-end and back-end technologies AI tools for code generation How early adopters balance functionality with product polish Validating ideas and understanding customer needs The impact of accelerators on networking and fundraising Approaches to managing pricing strategies for startups Balancing technical debt with feature development The role of telemetry and error reporting in product development Creating collaborative and supportive tech communities Educating users on AI's capabilities and limitations The broader implications of AI tools across industries Links Mentioned Contact Stephen & Julie at Standd: founders@standd.io https://www.standd.io/ https://www.digitalocean.com/community/tutorials/gangs-of-four-gof-design-patterns https://www.thriftbooks.com/w/code-completesteve-mcconnell/248753/item/15057346/ https://aws.amazon.com/sagemaker/ https://www.anthropic.com/ https://getoban.pro/ https://kubernetes.io/ https://www.apollographql.com/ https://aws.amazon.com/startups/accelerators https://accelerate.techstars.com/ https://aider.chat/ https://github.com/Aider-AI/aider https://neovim.io/ https://ui.shadcn.com/ https://tailwindui.com/ https://www.ycombinator.com/ https://www.thriftbooks.com/w/close-to-the-machine-technophilia-and-its-discontentsellen-ullman/392556 Special Guest: Stephen Solka.

Smart Software with SmartLogic
Machine Learning in Elixir vs. Python, SQL, and Matlab with Katelynn Burns & Alexis Carpenter

Nov 23, 2023 · 31:19


In this episode of Elixir Wizards, Katelynn Burns, software engineer at LaunchScout, and Alexis Carpenter, senior data scientist at cars.com, join Host Dan Ivovich to discuss machine learning with Elixir, Python, SQL, and MATLAB. They compare notes on available tools, preprocessing, working with pre-trained models, and training models for specific jobs. The discussion inspires collaboration and learning across communities while revealing the foundational aspects of ML, such as understanding data and asking the right questions to solve problems effectively. Topics discussed: Using pre-trained models in Bumblebee for Elixir projects Training models using Python and SQL The importance of data preprocessing before building models Popular tools used for machine learning in different languages Getting started with ML by picking a personal project topic of interest Resources for ML aspirants, such as online courses, tutorials, and books The potential for Elixir to train more customized models in the future Similarities between ML approaches in different languages Collaboration opportunities across programming communities Choosing the right ML approach for the problem you're trying to solve Productionalizing models like fine-tuned LLM's The need for hands-on practice for learning ML skills Continued maturation of tools like Bumblebee in Elixir Katelynn's upcoming CodeBeam talk on advanced motion tracking Links mentioned in this episode https://launchscout.com/ https://www.cars.com/ Genetic Algorithms in Elixir (https://pragprog.com/titles/smgaelixir/genetic-algorithms-in-elixir/) by Sean Moriarity Machine Learning in Elixir (https://pragprog.com/titles/smelixir/machine-learning-in-elixir/) by Sean Moriarity https://github.com/elixir-nx/bumblebee https://github.com/huggingface https://www.docker.com/products/docker-hub/ Programming with MATLAB (https://www.mathworks.com/products/matlab/programming-with-matlab.html) https://elixirforum.com/ https://pypi.org/project/pyspark/  Machine Learning Course (https://online.stanford.edu/courses/cs229-machine-learning) from Stanford School of Engineering Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow (https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/) by Aurélien Géron Data Science for Business (https://data-science-for-biz.com/) by Foster Provost & Tom Fawcett https://medium.com/@carscomtech  https://github.com/k-burns  Code Beam America (https://codebeamamerica.com/) March, 2024 Special Guests: Alexis Carpenter and Katelynn Burns.

ApfelNerds – Apple News, Gerüchte, Technik

In episode 151, the ApfelNerds talk about Apple tightening its belt, a second round of layoffs at Amazon, (no) news on Spotify HiFi, Microsoft planning its own app store on Apple hardware, small details about Apple's "AI Summit", Apple working on a "Natural Language Generation" feature for Siri, realityOS living on, the iPhone 15 Pro no longer having mechanical buttons, Apple doubling its order of 20-watt USB-C power adapters, the release candidate of iOS 16.4 arriving, and Clippy making a comeback!

How AI Built This
#78 - Ross Turner, CPO Arria - Natural Language Generation

Feb 28, 2023 · 44:31


Welcome back to How AI Built This, the show dedicated to data and entrepreneurial storytelling. In this episode I spoke to Ross Turner, the Chief Product Officer at Natural Language Generation specialists Arria. Ross is Berlin-based, working remotely for a now global team at Arria, where they are at the cutting edge of NLG. We had a great chat about his journey, NLG, data in general and, of course, some non-data related content, this time in the form of Mixed Martial Arts (MMA)! I hope you enjoy! As always, we're brought to you by the wonderful people at Cathcart Technology, recruitment experts, & Infer, who are building the next generation of analytics. Music by Noisyfilter from Fugue

Yannic Kilcher Videos (Audio Only)
Typical Decoding for Natural Language Generation (Get more human-like outputs from language models!)

Mar 28, 2022 · 48:55


#deeplearning #nlp #sampling Modern language models like T5 or GPT-3 achieve remarkably low perplexities on both training and validation data, yet when sampling from their output distributions, the generated text often seems dull and uninteresting. Various workarounds have been proposed, such as top-k sampling and nucleus sampling, but while these manage to somewhat improve the generated samples, they are hacky and unfounded. This paper introduces typical sampling, a new decoding method that is principled, effective, and can be implemented efficiently. Typical sampling turns away from sampling purely based on likelihood and explicitly finds a trade-off between generating high-probability samples and generating high-information samples. The paper connects typical sampling to psycholinguistic theories on human speech generation, and shows experimentally that typical sampling achieves much more diverse and interesting results than any of the current methods. Sponsor: Fully Connected by Weights & Biases https://wandb.ai/fully-connected OUTLINE: 0:00 - Intro 1:50 - Sponsor: Fully Connected by Weights & Biases 4:10 - Paper Overview 7:40 - What's the problem with sampling? 11:45 - Beam Search: The good and the bad 14:10 - Top-k and Nucleus Sampling 16:20 - Why the most likely things might not be the best 21:30 - The expected information content of the next word 25:00 - How to trade off information and likelihood 31:25 - Connections to information theory and psycholinguistics 36:40 - Introducing Typical Sampling 43:00 - Experimental Evaluation 44:40 - My thoughts on this paper Paper: https://arxiv.org/abs/2202.00666 Code: https://github.com/cimeister/typical-... Abstract: Despite achieving incredibly low perplexities on myriad natural language corpora, today's language models still often underperform when used to generate text. This dichotomy has puzzled the language generation community for the last few years. In this work, we posit that the abstraction of natural language as a communication channel (à la Shannon, 1948) can provide new insights into the behaviors of probabilistic language generators, e.g., why high-probability texts can be dull or repetitive. Humans use language as a means of communicating information, and do so in a simultaneously efficient and error-minimizing manner; they choose each word in a string with this (perhaps subconscious) goal in mind. We propose that generation from probabilistic models should mimic this behavior. Rather than always choosing words from the high-probability region of the distribution--which have a low Shannon information content--we sample from the set of words with information content close to the conditional entropy of our model, i.e., close to the expected information content. This decision criterion can be realized through a simple and efficient implementation, which we call typical sampling. Automatic and human evaluations show that, in comparison to nucleus and top-k sampling, typical sampling offers competitive performance in terms of quality while consistently reducing the number of degenerate repetitions. Authors: Clara Meister, Tiago Pimentel, Gian Wiher, Ryan Cotterell Links: Merch: store.ykilcher.com TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yann... 
LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :)
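
The decoding rule summarised in the abstract (sample from the set of words whose information content is close to the model's conditional entropy) can be sketched in a few lines of NumPy. This is a minimal illustration only, not the authors' reference implementation (that lives in the linked repository); the mass parameter tau and the renormalisation over the kept tokens are assumptions made for this sketch.

```python
# Minimal sketch of typical sampling as described in the abstract above.
# Assumes the model's next-token distribution is given as a probability vector;
# tau (the kept probability mass) is a parameter name chosen for this sketch.
import numpy as np

def typical_sample(probs: np.ndarray, tau: float = 0.95, rng=None) -> int:
    rng = rng or np.random.default_rng()
    probs = probs / probs.sum()                 # ensure a valid distribution
    info = -np.log(probs + 1e-12)               # per-token information content, -log p
    entropy = float(np.sum(probs * info))       # expected information content
    order = np.argsort(np.abs(info - entropy))  # tokens closest to the entropy first
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, tau)) + 1      # smallest "typical set" covering tau mass
    keep = order[:k]
    kept = probs[keep] / probs[keep].sum()      # renormalise over the typical set
    return int(rng.choice(keep, p=kept))

# Toy usage with a peaked 5-token distribution: the single most likely token is not
# automatically chosen, because its information content sits well below the entropy.
p = np.array([0.6, 0.2, 0.1, 0.06, 0.04])
print(typical_sample(p, tau=0.9))
```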

Yannic Kilcher Videos (Audio Only)
Author Interview - Typical Decoding for Natural Language Generation

Mar 28, 2022 · 48:55


#deeplearning #nlp #sampling This is an interview with first author Clara Meister. Paper review video here: https://youtu.be/_EDr3ryrT_Y Modern language models like T5 or GPT-3 achieve remarkably low perplexities on both training and validation data, yet when sampling from their output distributions, the generated text often seems dull and uninteresting. Various workarounds have been proposed, such as top-k sampling and nucleus sampling, but while these manage to somewhat improve the generated samples, they are hacky and unfounded. This paper introduces typical sampling, a new decoding method that is principled, effective, and can be implemented efficiently. Typical sampling turns away from sampling purely based on likelihood and explicitly finds a trade-off between generating high-probability samples and generating high-information samples. The paper connects typical sampling to psycholinguistic theories on human speech generation, and shows experimentally that typical sampling achieves much more diverse and interesting results than any of the current methods. Sponsor: Introduction to Graph Neural Networks Course https://www.graphneuralnets.com/p/int... OUTLINE: 0:00 - Intro 0:35 - Sponsor: Introduction to GNNs Course (link in description) 1:30 - Why does sampling matter? 5:40 - What is a "typical" message? 8:35 - How do humans communicate? 10:25 - Why don't we just sample from the model's distribution? 15:30 - What happens if we condition on the information to transmit? 17:35 - Does typical sampling really represent human outputs? 20:55 - What do the plots mean? 31:00 - Diving into the experimental results 39:15 - Are our training objectives wrong? 41:30 - Comparing typical sampling to top-k and nucleus sampling 44:50 - Explaining arbitrary engineering choices 47:20 - How can people get started with this? Paper: https://arxiv.org/abs/2202.00666 Code: https://github.com/cimeister/typical-... Abstract: Despite achieving incredibly low perplexities on myriad natural language corpora, today's language models still often underperform when used to generate text. This dichotomy has puzzled the language generation community for the last few years. In this work, we posit that the abstraction of natural language as a communication channel (à la Shannon, 1948) can provide new insights into the behaviors of probabilistic language generators, e.g., why high-probability texts can be dull or repetitive. Humans use language as a means of communicating information, and do so in a simultaneously efficient and error-minimizing manner; they choose each word in a string with this (perhaps subconscious) goal in mind. We propose that generation from probabilistic models should mimic this behavior. Rather than always choosing words from the high-probability region of the distribution--which have a low Shannon information content--we sample from the set of words with information content close to the conditional entropy of our model, i.e., close to the expected information content. This decision criterion can be realized through a simple and efficient implementation, which we call typical sampling. Automatic and human evaluations show that, in comparison to nucleus and top-k sampling, typical sampling offers competitive performance in terms of quality while consistently reducing the number of degenerate repetitions.
Authors: Clara Meister, Tiago Pimentel, Gian Wiher, Ryan Cotterell Links: Merch: store.ykilcher.com TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF

Digitale Optimisten: Perspektiven aus dem Silicon Valley
#43 Dominik, when will artificial intelligence write a bestseller?

Mar 14, 2022 · 80:14


Did you know that in 2016 an artificial intelligence almost won a literary prize in Japan? That is exactly what today's episode is about: synthetic media, texts written by computers. How big is the business opportunity? The best job offers from the most ambitious start-ups: http://www.digitaleoptimisten.de/jobs. All founder stories at http://www.digitaleoptimisten.de/stories. Follow Digitale Optimisten on Instagram: https://www.instagram.com/digitaleoptimisten/. Welcome to Digitale Optimisten. I'm Alex, and in this podcast we are looking for the next Elon Musks, people who want to change the world with their ideas. They are still at the very beginning, so you can judge whether it really is the next big thing. Did you know that in 2016 an artificial intelligence almost won a literary prize in Japan? That is exactly what today's episode is about: a truly fascinating topic with a very exciting founder. Have you ever wondered who actually produces all the texts you read every day? Articles on zeit.de, product descriptions on Amazon, email newsletters, ad copy in Instagram ads, and so on. There are texts everywhere that have to be produced so that they can somehow be read. Some estimates say that more than 8 billion euros of value is created every day through text alone. So it is not surprising that this market is huge: there are plenty of agencies producing great content, ghostwriters writing theses, and so on. And of course there is also artificial intelligence that wants to help with this. That is called Natural Language Generation, or NLG for short, and it is one of the areas where an enormous amount is happening right now. That is why I need to understand the topic better, and to do so I speak with Dominik Lambersy. Dominik developed TextCortex through the Entrepreneur First accelerator programme and firmly believes in the opportunities in this space. We talk about the size of the market, past and future developments and, of course, why TextCortex is supposed to be the next big thing. I'm happy about 5-star reviews and your feedback at alexander@digitaleoptimisten.de. Before this episode starts: please tell two friends or colleagues about this podcast if you enjoy these stories. It helps enormously in finding new listeners and letting the podcast keep growing. If you like, you can also give me a review on Apple Podcasts or Spotify.

How AI Happens
The Opportunity of NLG with Arria CTO Neil Burnett

Feb 3, 2022 · 27:29


Arria is a Natural Language Generation company that replicates the human process of expertly analyzing and communicating data insights. We caught up with their CTO, Neil Burnett, to learn more about how Arria's technology goes beyond the standard rules-based NLP approach, as well as how the technology develops and grows once it's placed in the hands of the consumer. Neil explains the huge opportunity within NLG, and how solving for seamless language based communication between humans and machines will result in increased trust and widespread adoption in AI/ML technologies.

AI Stories
Lyndsee Manna - Natural Language Generation #6

Dec 27, 2021 · 44:49


Our guest today is Lyndsee Manna, Executive Vice President in Strategic Partnerships and Global Business at Arria NLG.  Arria is a world leader in Natural Language Generation (NLG), a sub-branch of Natural Language Processing which uses algorithms to transform structured data into text. In our conversation, we first talk about Lyndsee's studies and her transition from being a musician to becoming interested in Maths and Technology. We then focus on her experience at Diligent where she brought the company from a concept to an IPO. We finally dig into the world of Natural Language Generation with Arria NLG. We explore what NLG actually is, the benefits that this technology can offer as well as the different AI algorithms used to transform large and complex datasets into text, a format which can easily be understood by humans. Throughout the conversation, Lyndsee also shares various career advice based on her many years of experience in the world of AI, technology and strategy.
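
The "structured data in, prose out" idea in the description above is easiest to see in a toy template-based generator. The sketch below is purely illustrative and is not Arria's technology; commercial NLG systems add content selection, aggregation and linguistic realisation on top of anything this simple, and the field names and sentence template here are invented for the example.

```python
# A toy data-to-text sketch: turn one structured record into a sentence.
# Field names and the template are assumptions made for illustration only.
def describe_quarter(record: dict) -> str:
    change = record["revenue"] - record["prior_revenue"]
    direction = "up" if change >= 0 else "down"
    return (f"{record['region']} revenue was {record['revenue']:,} {record['currency']}, "
            f"{direction} {abs(change):,} versus the prior quarter.")

print(describe_quarter({"region": "EMEA", "revenue": 1_250_000,
                        "prior_revenue": 1_100_000, "currency": "USD"}))
# -> EMEA revenue was 1,250,000 USD, up 150,000 versus the prior quarter.
```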

Highway to Scale
What lies behind 5 exits & 50+ successful investments / Interview with Massimiliano Squillace — CEO of Contents

Jul 1, 2021 · 25:25


In today's episode I'm joined by Massimiliano, the CEO of Contents. Massimiliano will tell us about the mindset of following his own instincts, what investors focus on when they're looking to invest in a new company, and what it takes to successfully build and sell multiple companies.
HOSTED BY: Dorian — Bornfight Content Creator, https://www.linkedin.com/in/itsdorian/, dorian.derezic@bornfight.com
GUEST: Massimiliano Squillace — Contents CEO, https://www.linkedin.com/in/massimilianosquillace/
ABOUT MASSIMILIANO: Massimiliano Squillace is a serial entrepreneur with 5 exits, one of which was in the world of digital publishing: Nano Publishing sold to Excite Populis. Author of the bestseller Successful Startups. He is one of the few Italians to have appeared in Forbes and Fortune. He has invested in more than 50 companies and is CEO of one of them.
ABOUT CONTENTS: https://contents.com/, https://www.linkedin.com/company/contentscom/. Contents is a tech marketing company that has developed a proprietary software platform that allows its users, through advanced data analysis and machine learning algorithms, to rapidly produce digital content on various topics and in a variety of languages, without compromising quality. The platform integrates modules for the analysis of online trends and competitors and for tracking user behaviour, which power Natural Language Processing and Natural Language Generation modules for the generation of AI-based content.
WANT TO BE A GUEST ON THE PODCAST? Contact me on LinkedIn -> https://www.linkedin.com/in/itsdorian/
GET IN TOUCH WITH BORNFIGHT: www.facebook.com/bornfight.company, www.linkedin.com/company/bornfight, www.instagram.com/bornfight, www.bornfight.com, hello@bornfight.com

In-Ear Insights from Trust Insights
{PODCAST} In-Ear Insights: Advances in AI Natural Language Generation and Marketing Implications

In-Ear Insights from Trust Insights

Play Episode Listen Later Jun 23, 2021


In this week’s In-Ear Insights, Katie and Chris talk about the newest advances in natural language generation and walk through an example of what’s available now for creating content with the assistance of AI. Watch the demonstration, listen to the implications for marketers, and start formulating your AI-based content marketing strategy. Tune in to find […]

Healthy Outcomes: A Baker Tilly Podcast
Irreverent Revenue Cycle No. 4: Natural language advantages in revenue cycle management

Healthy Outcomes: A Baker Tilly Podcast

Play Episode Listen Later Jun 22, 2021 21:59


This episode discusses natural language generation (NLG) software and how utilizing tools such as Wordsmith by Automated Insights can help healthcare provider organizations make actionable decisions to improve their revenue cycle performance.

Integrate & Ignite Podcast
Episode 367: AI-Generated Content Marketing with Jeff Coyle of MarketMuse

Integrate & Ignite Podcast

Play Episode Listen Later Jun 15, 2021 29:11


“Start thinking about how your content is created and stop thinking about it as a replacement.” Listen & Learn: Why a Subject Matter Expert is the most important asset a business has. The Natural Language Generation tools that improve AI content. The difference between Natural Language Processing and Natural Language Generation. Understanding content sentiment. The power of combining sentiment, intent, and content formats. The risks of antiquated content. Jeff Coyle is a cross-disciplined, data-driven inbound marketing executive with 20+ years of experience managing products and website networks and helping companies grow. Before he helped co-found MarketMuse, Jeff was VP of Search at TechTarget, where he built a 30-person content optimization team. As MarketMuse's Chief Product Officer, Jeff leads his team of content and AI veterans in using machine learning to enable their clients to gain authority in their topical domain. Jeff is all about creating the kind of content search engines love and helping content marketers, search engine marketers, agencies, and e-commerce managers establish and build topical authority, improve content quality, and get the most out of every dollar spent on content. Jeff is also a sought-after speaker at conferences and has advised Accel-KKR portfolio companies on lead generation. Content marketing strategies that elevate your brand: call LORI JONES today to learn more, 303-678-7102.

Artificial Intelligence in Industry with Daniel Faggella
Streamlining Financial Reporting with AI and NLG - with Emmanuel Walckenaer of YSEOP

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later May 25, 2021 31:05


Today's guest is Emmanuel Walckenaer, CEO of YSEOP, a company based in Paris and focused on Natural Language Generation. Emmanuel speaks to us today about workflows where automating reporting really delivers value, and we talk about two in particular, one of which is credit risk: understanding and assessing it, and what that process looks like today. What kind of data and insights are required, and where does AI fit into the mix? We also look at improving report quality for people who are controllers or analysts. Emmanuel explores how the use cases work and how the underpinning data infrastructure comes together to create a natural language-generated report. This episode is sponsored by YSEOP. If you are interested in reaching Emerj's audience and learning more about our creative services and how brands work with Emerj, you can go to emerj.com/ad1

Wise Content Creates Wealth
Content Production with AI & MarketMuse

Wise Content Creates Wealth

Play Episode Listen Later Apr 16, 2021 47:59


Today's guest is Jeff Coyle, Co-founder and Chief Product Officer for MarketMuse. He is focused on helping content marketers, search engine marketers, agencies, and publishers build topical authority, improve content quality and turn semantic research into actionable insights. Tune in for this intelligent conversation at TalkRadio.nyc or watch the Facebook Livestream by clicking here.

Show Notes

Segment 1: Joseph Mcelroy opens the show with a summary of what will be covered in today's episode. Joseph then gives the audience an explanation of the Google Question Hub and the opportunities it gives content marketers. Question Hub was used internationally but has recently made its way to the United States, and Joseph gives the app a constructive, positive testimonial. Joseph then introduces the guest, Jeff Coyle, the Co-founder and Chief Product Officer of MarketMuse. Jeff Coyle affirms Joseph's excitement for Google Question Hub and expands on the many things users can do with the app. Joseph gives an anecdote about the app and then segues into Jeff Coyle's explanation of his company, MarketMuse. He explains that MarketMuse helps content creators streamline their priorities when it comes to handling their content in all its aspects. Jeff Coyle explains that his goal is to help writers grow by not worrying about the minutiae that MarketMuse handles for them. Jeff Coyle tells how the acquisition of his company by TechTarget helped decide who is an expert and should be helping their creators.

Segment 2: Jeff Coyle gives Joseph a brief history of his career and the start of MarketMuse in September of 2015. Jeff Coyle explains the immediate growth of his company and how this growth helped him become a breakthrough in the industry. This breakthrough came with new opportunities for the company and helped MarketMuse level up to a company with thousands of customers worldwide. Jeff gives a teaser of a free version of MarketMuse launching later this year. Jeff and Joseph trade insider opinions about the state of content and the ill-informed opinions people may have about the effort it takes to market this content. Jeff Coyle tells the audience to understand who you're getting industry information from; it may make or break your career as a content creator. The ignorant opinions of some of the people claiming to know how to market people's content frustrate Jeff Coyle as a real expert in the industry. He then explains the difference between people who know what they are talking about and those who pretend and inevitably end up failing in the industry. Joseph agrees and the two share anecdotes about their careers.

Segment 3: Jeff Coyle tells Joseph how he prioritizes which part of the marketing process he handles first and with more attention for his clients. Jeff then talks about how he finds specific content to optimize among a large inventory, since many businesses need a solid content marketing strategy. An example given by Joseph gives Jeff a great talking point on how to handle this problem directly. Jeff Coyle explains how investing in a specific point of growth may seem easy at first, but there are many factors his clients don't realize, and he describes what he does to help clients in this situation. MarketMuse takes the guessing out of content marketing and instead gives you actionable plans to build on. Jeff Coyle gives industry strategies for getting more content out the door for his clients.

Segment 4: Joseph brings up the show and plugs his other podcast, but brings it back to Jeff Coyle to speak about Natural Language Generation. Jeff explains what Natural Language Generation is and how we use it every day in Google and other online platforms. Google is a great example of a company with a powerful language model, which makes it, along with Microsoft, MarketMuse's direct competition. Jeff Coyle gives testimony to how well Natural Language Generation works on an everyday basis. Joseph shares his short-term goal for creating content, and Jeff explains how that's possible with MarketMuse knowledge. Joseph and Jeff close out the show with calls to action for MarketMuse products.

Les Généralistes
We talk about artificial intelligence ft. Lucas Caccia

Les Généralistes

Play Episode Listen Later Jan 14, 2021 32:48


In this episode, we welcome Lucas Caccia, a PhD candidate in artificial intelligence and an intern at Facebook AI. We demystify artificial intelligence (AI), touching on machine learning, Natural Language Generation and other concepts. A few links if you want to learn more: Lucas's web page; Lucas's Google Scholar page. Contact us if you would like to sponsor us. Instagram: @lesgeneralistes Twitter: @LesGeneralistes Email: lesgeneralistespodcast@gmail.com

Productive AI Podcast
How can an AI understand language? Scott Leishman, XOKind | Productive AI Podcast with Troy Angrignon

Productive AI Podcast

Play Episode Listen Later Dec 16, 2020 56:15


How can an AI understand language? Computer-human communication is undergoing a revolution, and AI can now listen to, understand, and speak back to us in much more powerful ways than it could before. On this episode, hear Scott Leishman discuss how AI can now write news articles, blog posts, poetry, and novels, and how work done in the recent past is making it easier than ever to build incredibly powerful AI applications that can communicate with human beings.

-- TIMING --
00:00 Introduction
00:48 Scott's background in computer science at FICO, Core Logic, Nervana Systems (which exited to Intel for $400M in 2016), and Intel
06:56 What is Natural Language Processing (NLP)?
11:40 What was the significance of GPT-3's release this year?
16:31 What can GPT-3 do? (explain it to somebody who doesn't follow the field)
19:15 NLP is having its "ImageNet moment" – what does that mean? (Technical explanation)
25:39 Simplifying NLP for less-technical listeners
28:17 Standing on the shoulders of giants: pre-trained models are making it easier to build AI applications
30:05 What kinds of new use cases are possible with the current state-of-the-art NLP?
33:29 Apple Knowledge Navigator – are we there yet?
37:25 Where does NLP live in the AI stack?
41:34 What are you doing with NLP at XOKind?
49:47 What should people be doing to improve their chances of working in this space?
54:05 Summary

-- LINKS --
Books: Manning & Jurafsky is sort of the best known, comprehensive but a bit dated at this point. Fortunately they are working on a new draft: https://web.stanford.edu/~jurafsky/slp3/
Conferences: the big ones for NLP are ACL, EMNLP (was just last week), and CoNLL, but you'll also see a lot of new work at ICLR and NeurIPS.
Papers: the field moves quickly, but arXiv is the first place to find new results. I'd highly recommend searching through something like arxiv-sanity for a subject/topic of interest.
Mailing lists: I'm a big fan of Sebastian Ruder's monthly update; you can sign up for it at NLP News: https://ruder.io/nlp-news/
Sites: I mentioned https://nlpprogress.com/ to keep tabs on the current state of the art for given downstream tasks.
For folks who want a good practical introduction, I'd recommend Stanford's undergraduate NLP course (complete with video lectures online): http://web.stanford.edu/class/cs224n/
Getting interested in ML in general, this course is pretty good too if you have some programming experience under your belt: https://course.fast.ai/
Hugging Face are doing a lot of great work in the NLP space; they have easy integrations for various models, a solid Python library, etc.
Rasa are another open source solution; they now have APIs for helping build conversation agents.
XOKind: sign up for our mailing list on the front page here: https://www.xokind.com/
Job openings: the list is here: https://www.xokind.com/careers/ (scroll down the page). Growing frontend and backend engineering is a current focus for us.
Apple Knowledge Navigator video: https://www.youtube.com/watch?v=HGYFEI6uLy0

Mind Matters
Bingecast: Selmer Bringsjord on the Lovelace Test

Mind Matters

Play Episode Listen Later Sep 24, 2020 92:39


The Turing test, developed by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behaviour indistinguishable from a human. Many think that Turing’s proposal for intelligence, especially creativity, has been proven inadequate. Is the Lovelace test a better alternative? What are the capabilities and limitations of AI? Robert J. Marks and Dr. Selmer Bringsjord discuss… Source

Machine Learning Engineered
Catherine Yeo: Fairness in AI and Algorithms

Machine Learning Engineered

Play Episode Listen Later Sep 23, 2020 63:27


Catherine Yeo (http://catherineyeo.tech/) is a Harvard undergrad studying Computer Science. She's previously worked for Apple, IBM, and MIT CSAIL in AI research and engineering roles. She writes about machine learning in Towards Data Science and in her new publication Fair Bytes (http://fairbytes.org/).

Learn more about Catherine: http://catherineyeo.tech/
Read Fair Bytes: http://fairbytes.org/
Want to level up your skills in machine learning and software engineering? Subscribe to our newsletter: https://mlengineered.ck.page/943aa3fd46
Take the Giving What We Can Pledge: https://www.givingwhatwecan.org/
Subscribe to ML Engineered: https://mlengineered.com/listen
Follow Charlie on Twitter: https://twitter.com/CharlieYouAI

Timestamps:
(02:48) How she was first exposed to CS and ML
(07:06) Teaching a high school class on AI fairness
(10:12) Definition of AI fairness
(16:14) Adverse outcomes if AI bias is never addressed
(22:50) How do "de-biasing" algorithms work?
(27:42) Bias in Natural Language Generation
(36:46) State of AI fairness research
(38:22) Interventions needed?
(43:18) What can individuals do to reduce model bias?
(45:28) Publishing Fair Bytes
(52:42) Rapid Fire Questions

Links:
Defining and Evaluating Fair Natural Language Generation: https://arxiv.org/abs/2008.01548
Man is to Computer Programmer as Woman is to Homemaker?: https://arxiv.org/abs/1607.06520
Gender Shades: http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
GPT-3 Paper (Language Models are Few Shot Learners): https://arxiv.org/abs/2005.14165
How Biased is GPT-3?: https://medium.com/fair-bytes/how-biased-is-gpt-3-5b2b91f1177
Reading List for Fairness in AI Topics: https://medium.com/fair-bytes/reading-list-for-fairness-in-ai-topics-337e8606fd8d
Machine Learning's Obsession with Kids' TV Show Characters: https://towardsdatascience.com/machine-learnings-obsession-with-kids-tv-show-characters-728edfb43b3c

The AI Experience
Episode 021: GPT-3

The AI Experience

Play Episode Listen Later Sep 18, 2020 23:40


In this episode, Lloyd discusses GPT-3, the latest development in Natural Language Generation from OpenAI, produced by a Machine Learning model with 175 billion parameters.

Episode Guide:
0:59 - Intro to GPT-3
2:43 - Byte-Pair Encoded Tokens
3:49 - Conflicting Opinions & GPT-3 Capabilities
8:30 - The Nature of Understanding & Clever Hans
12:27 - Implications of GPT-3
19:43 - Weaponized NLG & Practical Concerns

More Info: Visit us at aiexperience.org. Brought to you by ICED(AI). Host - Lloyd Danzig
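For listeners wondering what "byte-pair encoded tokens" means, here is a toy sketch of the classic BPE merge loop in Python. It only illustrates the idea of repeatedly merging frequent symbol pairs; it is not OpenAI's actual GPT-3 tokenizer, and the corpus and merge count are made up.

```python
# Toy byte-pair encoding (BPE): repeatedly merge the most frequent adjacent
# symbol pair. Illustrative only; not OpenAI's GPT-3 tokenizer.
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across the corpus and return the top one."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0] if pairs else None

def merge_pair(pair, words):
    """Rewrite every word, replacing occurrences of `pair` with one merged symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = merged.get(tuple(out), 0) + freq
    return merged

if __name__ == "__main__":
    # Toy corpus: each word split into characters, with a frequency count.
    words = {tuple("lower"): 5, tuple("lowest"): 2, tuple("newer"): 6, tuple("wider"): 3}
    for _ in range(4):
        pair = most_frequent_pair(words)
        if pair is None:
            break
        print("merging", pair)
        words = merge_pair(pair, words)
    print("final symbols:", sorted({s for w in words for s in w}))
```

Models like GPT-3 apply this kind of subword tokenization (at the byte level, with tens of thousands of learned merges) so that any input text can be represented with a fixed vocabulary.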

Growth Marketing Today
How iPullRank Increased Organic Search Traffic by 50% using Natural-Language Generation with Michael King (GMT106)

Growth Marketing Today

Play Episode Listen Later Sep 3, 2020 32:26


No website can stand without a strong backbone. For content, that backbone is technical SEO, which is the structure of your website. Without it, everything else falls apart. In this episode, I talk to Michael King, the Founder and Managing Director of iPullRank, a full-service SEO agency. He shares how he used Natural-Language Generation to automatically generate metadata and increase the organic search traffic for one of their clients by 50 percent. Get the free growth cheat sheet with all actionable tips and notes from this episode in a one-page PDF at https://growthtoday.fm/106

Recruiting Reimagined presented by Curious Thing
Yasaman Motazedi, Lead Data Scientist at Curious Thing, on AI in Recruiting

Recruiting Reimagined presented by Curious Thing

Play Episode Listen Later Jul 14, 2020 17:57


In this episode, I'm joined by Curious Thing's Yasaman Motazedi to talk about some of the science behind AI in recruiting. We learn about what led Yasaman to be interested in the field of natural language processing, the benefits of using AI for both employers and candidates, and what's next in the field of computational linguistics. Yasaman holds a PhD in computational linguistics and machine learning, with her research strongly focused on Natural Language Processing, Natural Language Understanding, and Natural Language Generation using various ML and statistical graphical models. She has built multiple advanced conversational AI products in her career in large corporates and startups, including Macquarie Bank and MyAdvisor, which was acquired by MYOB. As the lead data scientist at Curious Thing, she leads R&D endeavours.

AM Tampa Bay - 970 WFLA Podcasts
Sharon Daniels-How COVID-19 Is Affecting Your Community

AM Tampa Bay - 970 WFLA Podcasts

Play Episode Listen Later Jul 9, 2020 6:44


Sharon Daniels, Chief Executive Officer of Arria, which specializes in the branch of artificial intelligence known as Natural Language Generation, joined AM Tampa Bay to discuss how COVID-19 is affecting your community.

Boundless
EP37: Dr Paul Piwek, Senior Lecturer in Computing, Open University: Opening up minds with artificial intelligence

Boundless

Play Episode Listen Later May 22, 2020 12:43


This is a conversation with Dr Paul Piwek. Paul gained his PhD at the Institute for Perception Research with his thesis entitled Logic, Information & Conversation. Currently, he is an Associate Professor in computing at the Open University. In this episode, Paul explains the rationale behind a new AI project that has just received funding, called ‘Opening Up Minds: Engaging Dialogue Generated From Argument Maps'. This project will use AI argue-bots to help people process, understand, reason with, argue about and, ultimately, make better decisions with information. These bots will call on Natural Language Processing, Natural Language Generation and Dialogue Modelling, and the project will also investigate the role of inference, reasoning and argumentation in communication.

Digital Marketing Legend Leaks
Dream Machine 1.0 is Not A Time Machine. Marketing Legend Srinidhi Ranganathan leaks the secret

Digital Marketing Legend Leaks

Play Episode Listen Later May 5, 2020 3:15


Tune in now. “The Dream Machine” is something that sounds like a time machine – a vehicle invented by the TIME TRAVELLER that allows him or her to travel through time. The Time Traveler may use it to remain in a fixed spatial position in relation to the Earth, but move swiftly through time using control levers to control the rate of temporal acceleration, allowing him or her to see the world moving rapidly or slowly, similar to a video tape set to fast-forward, ultimately reaching a period – whether past or future. But that idea is purely fictitious and experienced only in Hollywood movies.

Taking some inspiration from that idea, Marketing Legend “Srinidhi Ranganathan” weaves his magic yet again, and this time has found gold. Dream Machine 1.0 is something like a tribute to Srinidhi's 8-9 years of research in the field of Artificial Intelligence, and the course covers futuristic and innovative walkthroughs of never-before-heard tools that most never knew even existed.

Today's flurry of AI advancements wouldn't have been possible without the confluence of some mind-blowing factors that combined to create the right equation for AI growth: Natural Language Generation, Speech Recognition, Virtual Humans, Machine-Learning Tools, AI-Optimized Hardware, Decision Management, Deep Learning Platforms, Biometrics, Robotic Process Automation (RPA), Natural Language Processing (NLP), Mind-Cloning, Content Creation by Robots, Image Recognition and Digital Marketing Automation. Dream Machine 1.0 will help you discover more and more futuristic, innovative Artificial Intelligence (AI) technologies in these fields to propel your business and technology forward.

Visit https://www.spreaker.com/show/digital-marketing-legend-leaks and listen to Legend's secretive podcast.

Who is “Srinidhi Ranganathan”? Digital Marketing Legend “Srinidhi Ranganathan” is the Director of Digital Marketing at First Look Digital Marketing Solutions (India's First Artificial Intelligence Powered Digital Marketing company) located in Bangalore, India, and is one of the top instructors in India teaching futuristic marketing-related courses on Udemy. He is a Technologist, Digital Marketing Coach, Author, and Video Creation Specialist with over 12+ years of experience and has worked at top companies in India. Having over 6 lakh students on Udemy, he has facilitated digital marketing analysis and provided state-of-the-art marketing strategy ideas and tactical execution plans for top marketing companies in India, including startups, SMBs and MNCs. This includes strategic brainstorming sessions, Artificial Intelligence-powered market analysis, market research related to digital performance, and support of various AIDM marketing initiatives for new product and consumer promotional launches.

Srinidhi gained popularity through the unique, practical yet engaging training methodologies he uses during training sessions. Some of his training methods include gamified learning experiences conducted by virtual writing and teaching robots like “Aera 2.0” that prompt behavioural changes in students and bring forth a new kind of fascination among the crowd. These robots are virtual humans with super-intelligence capabilities. They can autonomously train anyone on topics ranging from ABC to Rocket Science, without human intervention.

Srinidhi's passionate fans call him a “Digital Marketing Legend” and he's busy working on creating new virtual and humanoid robots to revolutionize education in India and the world. He is deemed to be an innovator in the field of Artificial Intelligence (AI) based Digital Marketing and is someone who has embraced many ideas and has created various environments in which team members are taught the required AI automation tools and resources to challenge the status quo, push boundaries and achieve super-extensive growth.
His courses are a testament to where the future is actually heading.

Legend's YouTube Channel:
Full Free 11 Hours Digital Marketing Course - https://www.youtube.com/watch?v=-qJWb-Vtbhs&list=PL7iVMyVUvAra2egy90W498ItgO3UUc-Tr&index=1
Extract Billions of Email's (Email Marketing Course) - https://www.youtube.com/watch?v=WXLrWdf3f70&list=PL7iVMyVUvAra2egy90W498ItgO3UUc-Tr&index=5&t=269s
Artificial Intelligence in Digital Marketing (Full Guide for 2021) - https://www.youtube.com/watch?v=acSYPzX0gz8&list=PL7iVMyVUvAra2egy90W498ItgO3UUc-Tr&index=4&t=1s
Create a Google-like Search Engine in Minutes - https://www.youtube.com/watch?v=kgWY4eVpHNo&list=PL7iVMyVUvAra2egy90W498ItgO3UUc-Tr&index=9&t=86s
Earn Money Blogging without writing a word in 2021 - https://www.youtube.com/watch?v=ZI6pY0YTHLc&list=PL7iVMyVUvAra2egy90W498ItgO3UUc-Tr&index=6
Mini-SEO Course 2021 - https://www.youtube.com/watch?v=PDPslvkQyPk&list=PL7iVMyVUvAra2egy90W498ItgO3UUc-Tr&index=10&t=749s
Ultimate Graphic Design Course (Cloud Photo Editing of the Decade) - https://www.youtube.com/watch?v=jOZaRM2duFE&list=PL7iVMyVUvAra2egy90W498ItgO3UUc-Tr&index=3
WordPress CMS Full Course 2021 Edition - https://www.youtube.com/watch?v=HR8xRU1vus4&list=PL7iVMyVUvAra2egy90W498ItgO3UUc-Tr&index=13&t=556s
Create a Humanoid Robot Now (Secret Tutorial 2021) - https://www.youtube.com/watch?v=2hgljxLEezA&list=PL7iVMyVUvAra2egy90W498ItgO3UUc-Tr&index=88&t=10s
Free Mentorship in Artificial Intelligence (AI) Digital Marketing for 2021 - https://www.youtube.com/watch?v=ATuqwMU9kfs&list=PL7iVMyVUvAra2egy90W498ItgO3UUc-Tr&index=11
Social Media Extreme Hacks Secretive Tutorial 2021 - https://www.youtube.com/watch?v=K7_um4yfHkg&list=PL7iVMyVUvAra2egy90W498ItgO3UUc-Tr&index=83
Video Backlinks Course Tutorial to Skyrocket Viral Traffic - https://www.youtube.com/watch?v=RCN7a1oQKb8&list=PL7iVMyVUvAra2egy90W498ItgO3UUc-Tr&index=96

Mind Matters
Will AI Ever Write a Critically Acclaimed Novel?

Mind Matters

Play Episode Listen Later Apr 16, 2020 25:58


AI is starting to write and some of the copy reads quite well. Other writing is garbage. What’s the capability and limitation of writing by AI? Will AI ever write a novel that wins the Nobel Prize for Literature? Robert J. Marks and Dr. Selmer Bringsjord discuss creativity, artificial intelligence, and writing. Show Notes 00:37 | Introducing Selmer Bringsjord, Professor… Source

Business & Technology Insights from Capgemini
Capgemini and Yseop talk about collaboration orchestrated by the Capgemini ScaleUp Qualification

Business & Technology Insights from Capgemini

Play Episode Listen Later Feb 11, 2020 10:40


Yseop is an international AI software company that uses Natural Language Generation (NLG) technology to enable enterprise-level customer support, sales and reporting applications. Natural Language Generation utilizes advanced artificial intelligence to turn complex data sets into high-quality written narratives quickly and at scale. NLG enables organizations to automate manual, data-driven processes with applications like complex report writing and sales automation. Yseop is one of the world leaders in NLG and has been qualified as an Intermediate ScaleUp by the Capgemini ScaleUp Qualification program. Let's listen to Yseop's experience with the Capgemini ScaleUp Qualification program. The Capgemini ScaleUp Qualification orchestrates the matching between the best ScaleUps and the specific needs that the client identifies. To know more:
Check out the database of Qualified ScaleUps - https://scaleup-qualification.capgemini.com/scaleups
Playlist of Capgemini ScaleUp Qualification Podcasts - bit.ly/2Mjd29V
Fintech Co-Innovation with Capgemini's ScaleUp Qualification - bit.ly/2Mn0KNI
Suggest a ScaleUp to qualify - scaleup-qualification.capgemini.com/suggest…scaleup

The Georgian Impact Podcast | AI, ML & More
Episode 16: Using Natural Language Generation to Give Your Data a Voice

The Georgian Impact Podcast | AI, ML & More

Play Episode Listen Later Nov 25, 2019 28:12


Natural language generation (NLG) is a technology that allows companies to take vast quantities of data and turn them into compelling narratives that communicate the valuable insights they contain. In this episode, Jon Prial talks to Stuart Frankel, the CEO and Co-Founder of Narrative Science. Find out about the amazing advances in NLG technology and how this particular type of artificial intelligence is changing the way that many companies do business. You'll hear about:
- Narrative Science and how it helps data-rich companies become more efficient (1:03)
- The difference between natural language processing and natural language generation (3:12)
- Why NLG projects have to start with communication goals, not data (5:29)
- The limits of data visualization (8:01)
- Turning various types of data into narratives (9:59)
- Why Narrative Science initially didn't call itself an AI company (16:15)
- Leveraging other AI engines versus doing everything yourself (18:11)
- Measuring success and getting better with AI solutions (20:08)
- Approaching the many aspects of AI to create a successful strategy (21:29)
- The maturity of the NLG industry (23:06)
- Where CEOs should start with AI (24:15)

Somna med Henrik
NATURAL LANGUAGE GENERATION

Somna med Henrik

Play Episode Listen Later Jul 1, 2019 60:08


More from the AI book.

Marketing Over Coffee Marketing Podcast
Penn Nails Prognostication as Petition for Gillette to Part Patriots Place Permeates

Marketing Over Coffee Marketing Podcast

Play Episode Listen Later Mar 7, 2019


In this Marketing Over Coffee: Learn about Natural Language Generation and more!
Direct Link to File
Brought to you by our sponsors: ahrefs and LinkedIn
GPT2 – Pre-trained model not released
Tracking FB without a pixel
7:32 LinkedIn Marketing Solutions is how B2B Marketers and advertisers drive brand awareness and generate leads. It's the only […]
The post Penn Nails Prognostication as Petition for Gillette to Part Patriots Place Permeates appeared first on Marketing Over Coffee Marketing Podcast.

Digitaliseringspådden
Energy, Cryptocurrency and Robot Journalism

Digitaliseringspådden

Play Episode Listen Later Jan 13, 2019 53:15


Anders Brenna is the editor of enerWE and has "escaped" from Alcatraz. Today he is in the studio talking about what is happening with digitalisation in the energy industry. The episode covers the oil industry, the electricity industry, solar panels and Natural Language Generation.

Go Beyond Disruption
"Natural Language Generation: Turning Data into Insights" with Marc Zionts of Automated Insights (Durham, USA). GBD7

Go Beyond Disruption

Play Episode Listen Later Jul 4, 2018 18:17


"How do you feel about looking into a rear-view mirror that's three months old? How does that help you move your business along?" Marc Zionts, CEO of North Carolina-based Automated Insights, explains how natural language generation can help various industries go beyond business intelligence dashboards and automate data insights.     == MORE ABOUT OUR PODCAST ==    DON'T MISS OUT. Get the latest show every week, automatically and free, at https://www.aicpa-cima.com/disruption.html. Share it easily with colleagues and friends by using the icons on the media player.  TAKE IT FURTHER. Find related CPD/CPE resources at https://www.aicpastore.com/GoBeyondDisruption and https://www.cgmastore.com/GoBeyonddisruption. STAY CONNECTED. Follow #GoBeyondDisruption, @AICPANews and @CIMA_News on social."   ©2018 Association of International Certified Professional Accountants (AICPA & CIMA). All rights reserved 

Artificial Intelligence in Industry with Daniel Faggella
Robbie Allen from Automated Insights - The Use-Cases of Natural Language Generation

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Mar 11, 2018 31:03


Episode Summary: Machine learning (ML) can be used to identify objects in pictures or help steer vehicles, but it is not best suited for text-based AI applications, says Robbie Allen, founder of Automated Insights. In this episode of AI in Industry, we speak with Robbie about what is possible in generating text with AI and why rule-based processes are a big part of natural language generation (NLG). We also explore which industries are likely to adopt such NLG techniques and in what ways NLG can help in business intelligence applications in the near future. You can listen to the full interview with Robbie here: https://www.techemergence.com/robbie-allen-from-automated-insights-the-use-cases-of-natural-language-generation
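As a rough illustration of the rule-based side of NLG that Robbie describes, the sketch below applies hand-written threshold rules to a metric and picks a sentence template accordingly. The thresholds, wording and function names are hypothetical, not Automated Insights' actual product logic.

```python
# Hypothetical sketch of rule-based NLG for business-intelligence reporting:
# hand-written rules map a metric's movement to a canned sentence template.
# Not Automated Insights' product logic; thresholds and wording are invented.

RULES = [
    (lambda pct: pct >= 10,  "{metric} jumped {pct:.1f}% versus last period."),
    (lambda pct: pct >= 2,   "{metric} grew a modest {pct:.1f}%."),
    (lambda pct: pct > -2,   "{metric} was roughly flat ({pct:+.1f}%)."),
    (lambda pct: pct > -10,  "{metric} slipped {abs_pct:.1f}%."),
    (lambda pct: True,       "{metric} dropped sharply, down {abs_pct:.1f}%."),
]

def narrate(metric: str, current: float, previous: float) -> str:
    """Pick the first rule whose condition matches and fill in its template."""
    pct = (current - previous) / previous * 100
    for condition, template in RULES:
        if condition(pct):
            return template.format(metric=metric, pct=pct, abs_pct=abs(pct))
    raise RuntimeError("unreachable: the final rule always matches")

if __name__ == "__main__":
    print(narrate("Web traffic", 12_500, 11_000))  # Web traffic jumped 13.6% versus last period.
    print(narrate("Email sign-ups", 980, 1_050))   # Email sign-ups slipped 6.7%.
```

Statistical and neural approaches can then add variety and context on top of, or in place of, rules like these, which is part of the trade-off discussed in the episode.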

AWS re:Invent 2017
ALX322: Natural Language Processing Plus Natural Language Generation: The Cutting Edge of Voice Design

AWS re:Invent 2017

Play Episode Listen Later Nov 30, 2017 39:09


Your Alexa skill could become the voice of your company to customers. How do you make sure that it conveys rich information, delivered with your brand's personality? In this session, Adam Long, VP of Product Management at Automated Insights, discusses natural language generation (NLG) techniques and how to make your Alexa response more insightful and engaging. Rob McCauley, Solutions Architect with Amazon Alexa, shows you how to put those techniques into action.

Data Podcast
Jen Underwood (@idigdata) - Natural Language Generation, NLG vs NLP, Automation Analytics

Data Podcast

Play Episode Listen Later Aug 18, 2017 26:21


Jen Underwood, founder of Impact Analytix, LLC, is a recognized analytics industry expert. She has a unique blend of product management, design and over 20 years of "hands-on" development of data warehouses, reporting, visualization and advanced analytics solutions. In addition to keeping a constant pulse on industry trends, she enjoys digging into oceans of data. Jen is honored to be an IBM Analytics Insider, SAS contributor, former Tableau Zen Master, and active analytics community member. In the past, Jen has held worldwide product management roles at Microsoft and served as a technical lead for system implementation firms. She has launched new analytics products and turned around failed projects. Today she provides industry thought leadership, advisory, strategy, and market research. She also writes for InformationWeek, O'Reilly Media and other tech industry publications. Jen has a Bachelor of Business Administration – Marketing, Cum Laude from the University of Wisconsin, Milwaukee and a post-graduate certificate in Computer Science – Data Mining from the University of California, San Diego.

Interviewers: Rajib Bahar, Shabnam Khan
- WSJ had an article on automation analytics recently, as if we don't have enough terms to keep track of, such as descriptive analytics, predictive analytics, prescriptive analytics. What is the deal with automation analytics? Are they calling automatically scheduled jobs automation analytics? Or is this concept completely different?
- According to Gartner, "By 2019, natural-language generation will be a standard feature of 90% of modern BI and analytics platforms." NLG was also cited by Forbes in 2017 as a Top 10 Hot AI technology. What is natural-language generation? How does this subfield of AI differ from Natural Language Processing, or NLP?
- Recently, you released a white paper on "Humanizing Enterprise Application Software with Natural Language". Would you like to share the lessons you have learned?
- What major forces are currently driving demand for Advanced NLG?
- How do Basic & Advanced NLG work?
- Are there any benefits of embedding NLG into applications?
- Is Quill by Narrative Science the only NLG product in this area? How does it compare to the competition? Please share its pros and cons versus other similar platforms.
- How can we connect with you on Twitter or other professional networking sites?

Music: www.freesfx.co.uk

Artificial Intelligence in Industry with Daniel Faggella
Fundamentals of Natural Language Generation in Business Intelligence

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Mar 19, 2017 33:50


You might be aware that some of the articles online about sports or the financial performance of companies are written by machines; this machine learning-based technology is the burgeoning field of natural language generation (NLG), which aims to create written content as humans would, in context, but at greater speed and scale. Yseop is one such enterprise software company, whose product suite turns data into written insight, explanations, and narrative. In this episode we interview Yseop's Vice President Matthieu Rauscher, who talks about the fundamentals of natural language generation in business, and what conditions need to be in place in order to drive key objectives. Rauscher also addresses the difference between discovery-oriented machine learning (ML) and production-level ML, and why different industries might be drawn to one over the other.

O'Reilly Radar Podcast - O'Reilly Media Podcast
Kristian Hammond on truly democratizing data and the value of AI in the enterprise

O'Reilly Radar Podcast - O'Reilly Media Podcast

Play Episode Listen Later Nov 25, 2015 14:10


The O'Reilly Radar Podcast: Narrative Science's foray into proprietary business data and humanizing machines to bridge the data gap. O'Reilly's Mac Slocum chats with Kristian Hammond, Narrative Science's chief scientist. Hammond talks about Natural Language Generation, Narrative Science's shift into the world of business data, and evolving beyond the dashboard.

Here are a few highlights:

We're not telling people what the data are; we're telling people what has happened in the world through a view of that data. I don't care what the numbers are; I care about who are my best salespeople, where are my logistical bottlenecks. Quill can do that analysis and then tell you — not make you fight with it, but just tell you — and tell you in a way that is understandable and includes an explanation about why it believes this to be the case. Our focus is entirely, a little bit in media, but almost entirely in proprietary business data, and in particular we really focus on financial services right now.

You can't make good on that promise [of what big data was supposed to do] unless you communicate it in the right way. People don't understand charts; they don't understand graphs; they don't understand lines on a page. They just don't. We can't be angry at them for being human. Instead we should actually have the machine do what it needs to do in order to fill that gap between what it knows and what people need to know. The point of the technology is to humanize the machine so we don't have to mechanize people.

I always think it's a sad, sad state of the world where technologists keep demanding that everyone become data literate. What they mean is that everyone needs to have the analytical skills needed to look at a data set and figure out what's going on. I always see that as technologists saying, "We failed. We could not figure out how to explain to you what's going on, so you have to have our skills." While I think it's a noble notion that everybody has these skills, it's not going to happen. At the end of the day, it's not democratizing data to say we're going to do that. It's meritocratizing data. It's saying, "The only people who are allowed to understand what's happening in the world, based upon this data, are the people who have these high-end skills." It's incumbent upon us as technologists to move that data into information that is absolutely accessible to regular people. If we don't do it, we have failed.

I think over the next two years, we're actually going to see a shift in the business attitude toward artificial intelligence. Right now, businesses are really struggling with, "What's going to be my AI or cognitive computing strategy?" That's going to shift into, "I have particular problems, are there particular AI systems that can solve these problems?" What we're going to get is a much more rational approach to the introduction of AI into the business world. It's not, "We need machine learning," it's, "We actually need to understand churn." It's not that we need predictive analytics, it's that we actually need to know when our supply chains are going to break down.

Subscribe to the O'Reilly Radar Podcast: Stitcher, TuneIn, iTunes, SoundCloud, RSS
