Join Dan Vega and DaShaun Carter for an insightful discussion with Sharat Chander, Senior Director of Java Product Management & Developer Relations at Oracle. With JavaOne 2025 on the horizon and Java's 30th anniversary approaching, Sharat shares his vision for Java's future and reflects on his two-decade journey in shaping the Java community. From his pivotal role in JavaOne to leading developer relations initiatives, discover how Java continues to evolve and adapt to modern development needs. Get an exclusive preview of what to expect at JavaOne 2025 in Redwood Shores and learn about the exciting features coming in Java 24.

Topics Discussed

Java's 30th Anniversary
- Java will be celebrating 30 years on May 23, 2025
- Discussion about Java's longevity despite being "declared dead" multiple times

The Java Platform and Community
- Java as both a technology platform and a community platform
- Value of bi-directional relationships in community building
- The importance of user groups (nearly 370 Java user groups worldwide)
- Importance of reciprocity in community engagement
- The OpenJDK community as a demonstration of collaborative development

Upcoming Conferences

DevNexus
- Happening in just over two weeks (17 days from recording)
- Longest-running and largest third-party Java conference in North America
- Run by the Atlanta Java User Group
- Celebrating nearly 20 years

JavaOne
- March 18-20, 2025 at Oracle's campus in Redwood Shores
- Will feature six parallel tracks
- Many Java team members will be present, including Mark Reinhold (Java architect), Brian Goetz (Java language architect), Paul Sandoz (Java AI strategy), and Ron Pressler (Project Loom)
- Special discount code for Java User Group members valid through March 10th

JDK 24 Release
- Scheduled for March 18, 2025 (coinciding with the first day of JavaOne)
- Will include numerous JEPs (JDK Enhancement Proposals)
- Discussion about the preview feature process and gathering community feedback

Essential Resources Mentioned
- OpenJDK Community: openjdk.org. Where Java is created; home of the long-term investment projects (Panama, Valhalla, Amber, Loom); the place to participate in Java development
- Dev Java: dev.java. Learning portal for all things Java, with tutorials, community information, and a Java playground for testing features
- Inside Java: inside.java. Aggregation of publications from the Java team, with easy discovery of content by author and topic
- Java YouTube Channel: youtube.com/Java. Educational videos for learning Java

Notable Quotes
- "Community means bi-directional information sharing that you stay connected on for the long term." - Sharat
- "For me, community is about taking care of each other." - Sharat
- "One of my favorite words that doesn't get often used, but we need to make it more of our vernacular, is reciprocity." - Sharat

Next Steps
- Follow the Java team at upcoming conferences
- Check out the JDK 24 release on March 18
- Explore early access builds for Java 25
- Get involved with your local Java User Group
In this special episode of the Oracle University Podcast, Lois Houston and Nikita Abraham, along with Principal HCM Instructor Jeff Schuster, delve into the intersection of HCM and AI, exploring the practical applications and implications of this technology in human resources. Jeff shares his insights on bias and fairness, the importance of human involvement, and the need for explainability and transparency in AI systems. The discussion also covers the various AI features embedded in HCM and their impact on talent acquisition, performance management, and succession planning. Oracle AI in Fusion Cloud Human Capital Management: https://mylearn.oracle.com/ou/learning-path/oracle-ai-in-fusion-cloud-human-capital-management-hcm/136722 Oracle Fusion Cloud HCM: Dynamic Skills: https://mylearn.oracle.com/ou/course/oracle-fusion-cloud-hcm-dynamic-skills/116654/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ Twitter: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs here at Oracle University, and with me, is Nikita Abraham, Team Lead of Editorial Services. Nikita: Hi everyone! Last week's conversation was all about Oracle Database 23ai backup and recovery, where we dove into instance recovery and effective recovery strategies. Today's episode is a really special one, isn't it, Lois? 00:53 Lois: It is, indeed, Niki. Of course, all of our AI episodes are special. 
But today, we have our friend and colleague Jeff Schuster with us. I think our listeners are really going to enjoy what Jeff has to share with us. Nikita: Yeah, definitely! Jeff is a Principal HCM Instructor at Oracle University. He recently put together this really fantastic course on MyLearn, all about the intersection of HCM and AI, and that's what we want to pick his brain about today. Hi Jeff! We're so excited to have you here. 01:22 Jeff: Hey Niki! Hi Lois! I feel special already. Thank you guys so much for having me. Nikita: You've had a couple of busy months, haven't you? 01:29 Jeff: I have! It's been a busy couple of months with live classes. I try and do one on AI in HCM at least once a month or so, so that we can keep up with the latest/greatest stuff in that area. And I also got to spend a few days at Cloud World teaching a few live classes (about artificial intelligence in HCM, as a matter of fact) and meeting our customers and partners. So yeah, absolutely great week. A good time was had by me. 01:55 Lois: I'm sure. Cloud World is such a great experience. And just to clarify, do you think our customers and partners also had a good time, Jeff? It wasn't just you, right? Jeff: Haha! I don't think it was just me, Lois. But, you know, HCM is always a big deal, and now with all the embedded AI functionality, it really wasn't hard to find people who wanted to spend a little extra time talking about AI in the context of our HCM apps. So, there are more than 30 separate AI-powered features in HCM. AI features for candidates to find the right jobs; for hiring managers to find the right candidates; skills, talent, performance management, succession planning— all of it is there and it really covers everything across the Attract/Grow/Keep buckets of the things that HR professionals do for a living. So, anyway, yeah, lots to talk about with a lot of people! There's the functional part that people want to know about—what are these features and how do they work? 
But obviously, AI carries with it all this cultural significance these days. There's so much uncertainty that comes from this pace of development in that area. So in fact, my Cloud World talk always starts with this really silly intro that we put in place just to knock down that anxiety and get to the more practical, functional stuff. 03:11 Nikita: Ok, we're going to need to discuss the functional stuff, but I feel like we're getting a raw deal if we don't also get that silly intro. Lois: She makes a really good point. Jeff: Hahaha! Alright, fair enough. Ok, but you guys are gonna have to imagine I've got a microphone and a big room and a lot of echo. AI is everywhere. In your home. In your office. In your homie's home office. 03:39 Lois: I feel like I just watched the intro of a sci-fi movie. Jeff: Yeah. I'm not sure it's one I'd watch, but I think more importantly it's a good way to get into discussing some of the overarching things we need to know about AI and Oracle's approach before we dive into the specific features, so you know, those features will make more sense when we get there? 03:59 Nikita: What are these “overarching” things? Jeff: Well, the things we work on anytime we're touching AI at Oracle. So, you know, it starts with things like Bias and Fairness. We usually end up in a pretty great conversation about things like how we avoid bias on the front end by making sure we don't ingest things like bias-generating content, which is to say data that doesn't necessarily represent bias by itself, but could be misused. And that pretty naturally leads us into a talk about guardrails. Nikita: Guardrails? Jeff: Yeah, you can think of those as checkpoints. So, we've got rules about ingestion and bias. And if we check the output coming out of the LLM to ensure it complied with the bias and fairness rules, that's a guardrail. So, we do that. And we do it again on the apps side. 
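As an aside, the double guardrail Jeff describes (check the LLM output once on the AI side, then again before the app accepts it) can be sketched in a few lines. The blocked-term rule and function names below are illustrative assumptions, not Oracle's actual checks:

```python
# A minimal sketch of the "guardrail" checkpoints discussed above.
BLOCKED_TERMS = {"age", "gender", "nationality"}  # stand-in bias policy, not Oracle's

def passes_guardrail(text: str) -> bool:
    """Return True when the text references none of the blocked attributes."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def deliver_to_app(llm_output: str) -> str:
    """Apply the policy twice: once AI-side, once app-side."""
    # Guardrail #1: check the raw output on the AI side.
    if not passes_guardrail(llm_output):
        raise ValueError("rejected by AI-side guardrail")
    # Guardrail #2: the app re-checks before accepting the content.
    if not passes_guardrail(llm_output):
        raise ValueError("rejected by app-side guardrail")
    return llm_output
```

A real system would use far more sophisticated checks than substring matching; the point is only that the output must clear both checkpoints before it reaches the HCM app.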
And so that's to say, even though it's already been checked on the AI side, before we bring the output into the HCM app, it's checked again. So another guardrail. 04:58 Lois: How effective is that? The guardrails, and not taking in data that's flagged as bias-generating? Jeff: Well, I'll say this: It's both surprisingly good, and also nowhere near good enough. Lois: Ok, that's as clear as mud. You want to elaborate on that? Jeff: Haha! I think all it means is that approach does a great job, but our second point in the whole “standards” discussion is about the significance of having a human in the loop. Sometimes more than one, but the point here is that, particularly in HCM, where we're handling some really important and sensitive data, and we're introducing really powerful technology, the H in HCM gets even more important. So, throughout the HCM AI course, we talk about opportunities to have a human in the loop. And it's not just for reviewing things. It's about having the AI make suggestions, and not decisions, for example. And that's something we always have a human in the loop for. In fact, when I started teaching AI for HCM, I always said that I like to think of it as a great big brain, without any hands. 06:00 Nikita: So, we're not talking about replacing humans in HCM with AI. Jeff: No, but we're definitely talking about changing what the humans do and why it's more important than ever what the humans do. So, think of it this way, we can have our embedded AI generate this amazing content, or create really useful predictions, whatever it is that we need. We can use whatever tools we want to get there, but we can still expect people to ask us, “Where did that come from?” or “Does this account for [whatever]?”. So we still have to be able to answer that. So that's another thing we talk about as kind of an overarching important concept: Explainability and Transparency. 06:41 Nikita: I'm assuming that's the part about showing our work, right? 
Explaining what's being considered, how it's being processed, and what it is that you're getting back. Jeff: That's exactly it. So we like to have that discussion up front, even before we get to things like Gen and Non-Gen AI, because it's great context to have in mind when you start thinking about the technology. Whenever we're looking at the tech or the features, we're always thinking about whether people are appropriately involved, and whether people can understand the AI product as well as they need to. 07:11 Lois: You mentioned Gen and Non-Gen AI. I've also heard people use the term “Classic AI.” And lately, a lot more about RAG and Agents. When you're teaching the course, does everybody manage to keep all the terminology straight? Jeff: Yeah, people usually do a great job with this. I think the trick is, you have to know that you need to know it, if that makes sense. Lois: I think so, but why don't you spell it out for us. Jeff: Well, the temptation is sometimes to leave that stuff to the implementers or product developers, who we know need to have a deep understanding of all of that. But I think what we've learned is, especially because of all the functional implications, practitioners, product owners, everybody needs to know it too. If for no other reason so they can have more productive conversations with their implementers. You need to know that Classic or Non-Generative AI leverages machine learning, and that that's all you need in order to do some incredibly powerful things like predictions and matching. So in HCM, we're talking about things like predicting time to hire, identifying suggested candidates for job openings, finding candidates similar to ones you already like, suggesting career paths for employees, and finding recommended successors. All really powerful matching stuff. And all of that stuff uses machine learning and it's certainly AI, but none of that uses Generative AI to do that because it doesn't need to. 
08:38 Nikita: So how does that fit in with all the hype we've been hearing for a long time now about Gen AI and how it's such a transformative technology that's going to be more impactful than anything else? Jeff: Yeah, and that can be true too. And this is what we really lean into when we do the AI in HCM course live. It's much more of a “right AI for the right job” kind of proposition. Lois: So, just like you wouldn't use a shovel to mix a cake. Use the right tool for the job. I think I've got it. So, the Classic AI is what's driving those kinds of features in HCM? The matching and recommendations? Jeff: Exactly right. And where we need generative content, that's where we add on the large language model capability. With LLMs, we get the ability to do natural language processing. So it makes sense that that's the technology we'd use for tasks like “write me a job description” or “write me performance development tips for my employee”. 09:33 Nikita: Ok, so how does that fit in with what Lois was asking about RAG and Agents? Is that something people care about, or need to? Jeff: I think it's easiest to think about those as the “what's next” pieces, at least as it relates to the embedded AI. They kind of deal with the inherent limitations of Gen and Non-Gen components. So, RAG, for example - I know you guys know, but your listeners might not...so what's RAG stand for? Lois & Nikita: Retrieval. Augmented. Generation. Jeff: Hahaha! Exactly. Obviously. But I think everything an HCM person needs to know about that is in the name. So for me, it's easiest to read that one backwards. Retrieval Augmented Generation. Well, the Generation just means it's more generative AI. Augmented means it's supplementing the existing AI. And Retrieval just tells you that that's how it's doing it. It's going out and fetching something it didn't already have in order to complete the operation. 10:31 Lois: And this helps with those limitations you mentioned? 
Nikita: Yeah, and what are they anyway? Jeff: I think an example most people are familiar with is that large language models are trained on this huge set of information. To a certain point. So that model is trained right up to the point where it stopped getting trained. So if you're talking about interacting with ChatGPT, as an example, it'll blow your doors off right up until you get to about October of 2023 and then, it just hasn't been trained on things after that. So, if you wanted to have a conversation about something that happened after that, it would need to go out and retrieve the information that it needed. For us in HCM, what that means is taking the large language model that you get with Oracle, and using retrieval to augment the AI generation for the things that the large language model wouldn't have had. 11:22 Nikita: So, things that happened after the model was trained? Company-specific data? What kind of augmenting are you talking about? Jeff: It's all of that. All those things happen and it's anything that might be useful, but it's outside the LLM's existing scope. So, let's do an example. Let's say you and Lois are in the market to hire someone. You're looking for a Junior Podcast Assistant. We'd like the AI in HCM to help, and in order to do that, it would be great if it could not just generate a generic job description for the posting, but it could really make it specific to Oracle. Even better, to Oracle University. So, you'd need the AI to know a few more things in order to make that happen. If it knows the job level, and the department, and the organization—already the job posting description gets a lot better. So what other things do you think it might need to know? 12:13 Lois: Umm I'm thinking…does it need to account for our previous hiring decisions? Can it inform that at all? Jeff: Yes! That's actually a key one. 
If the AI is aware not only of all the vacancies and all of the transactional stuff that goes along with it (like you know who posted it, what's its metadata, what business group it was in, and all that stuff)...but it also knows who we hired, that's huge. So if we put all that together, we can start doing the really cool stuff—like suggesting candidates based not only on their apparent match on skills and qualifications, but also based on folks that we've hired for similar positions. We know how long it took to make those hires from requisition open to the employee's first start date. So we can also do things like predicting time to hire for each vacancy we have with a lot more accuracy. So now all of a sudden, we're not just doing recruiting, but we have a system that accounts for “how we do it around here,” if that makes any sense. But the point is, it's the augmented data, it's that kind of training that we do throughout ingestion, going out to other sources for newer or better information, whatever it is we need. The ability to include it alongside everything that's already in the LLM, that's a huge deal. 13:31 Nikita: Ok, so I think the only one we didn't get to was Agents. Jeff: Yeah, so this one is maybe a little less relevant in HCM—for now anyway. But it's something to keep an eye on. Because remember earlier when I described our AI as having a great big brain but no hands? Lois: Yeah... Jeff: Well, agents are a way of giving it hands. At least for a very well-defined, limited set of purposes. So routine and repetitive tasks. And for obvious reasons, in the HCM space, that causes some concerns. You don't want, for example, your AI moving people forward in the recruiting process or changing their status to “not considered” all by itself. So going forward, this is going to be a balancing act. When we ask the same thing of the AI over and over again, there comes a point where it makes sense to kind of “save” that ask. 
When, for example, we get the “compare a candidate profile to a job vacancy” results and we got it working just right, we can create an agent. It's just that one AI call that specializes in getting that analysis right. It does the analysis, it hands it back to the LLM, and when the human has what they need to make a decision out of it, you've got automation on one hand and human hands on the other...hand. 14:56 Have you mastered the basics of AI? Are you ready to take your skills to the next level? Unlock the potential of advanced AI with our OCI Generative AI Professional course and certification that covers topics like large language models, the OCI Generative AI Service, and building Q&A chatbots for real-world applications. Head over to mylearn.oracle.com to find out more. 15:26 Nikita: Welcome back! Jeff, you've mentioned the “Time to Hire” feature a few times? Is that a favorite with people who take your classes? Jeff: The recruiting folks definitely seem to enjoy it, but I think it's just a great example for a couple of reasons. First, it's really powerful non-generative AI. So it helps emphasize the point around the right AI for the right job. And if we're talking about things in chronological order, it's something that shows up really early in the hire-to-retire cycle. And, you know, just between us learning nerds, I like to use Time to Hire as an early example because it gets folks in the habit of working through some use cases. You don't really know if a feature is going to get you what you need until you've done some of that. So, for example, let's say I tell you that Time to Hire produces an estimated number of days to your first hire. And you're still Lois, and you're still Niki, and you're hiring for a Junior Podcast Assistant. So why do you care about time to hire? And I'm asking you for real—What would you do with that prediction if you had it? 
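Reading Retrieval Augmented Generation backwards, as Jeff suggested earlier, maps directly onto code: retrieve something the model was not trained on, use it to augment the prompt, then generate. The company data and the stand-in generate function below are illustrative assumptions, not a real LLM integration:

```python
# A minimal sketch of the RAG flow discussed above.
COMPANY_FACTS = {  # hypothetical company-specific data, outside the LLM's training
    "Junior Podcast Assistant": "Reports to Oracle University; edits episode audio.",
}

def retrieve(job_title: str) -> str:
    """Retrieval: fetch context the base model would not know."""
    return COMPANY_FACTS.get(job_title, "")

def generate(prompt: str) -> str:
    """Stand-in for the real LLM call."""
    return f"JOB DESCRIPTION based on: {prompt}"

def rag_job_description(job_title: str) -> str:
    context = retrieve(job_title)                # Retrieval
    prompt = f"{job_title}. Context: {context}"  # Augmentation
    return generate(prompt)                      # Generation
```

With the retrieved context in the prompt, the generated description can be specific to the organization instead of generic.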
16:29 Nikita: I guess I'd know how long it is before I can expect help to arrive, and I could plan my work accordingly. Jeff: Absolutely. What else. What could you do with a prediction for Time to Hire? Lois: Think about coverage? Jeff: Yeah! Exactly the word I was looking for. Say more about that. Lois: Well, if I know it's gonna be three months before our new assistant starts, I might be able to plan for some temporary coverage for that work. But if I had a prediction that said it's only going to be two weeks before a new hire could start, it probably wouldn't be worth arranging temporary coverage. Niki can hold things down for a couple of weeks. Jeff: See, I'm positive she could! That's absolutely perfect! And I think that's all you really need to have in terms of prerequisites to understand any of the AI features in HCM. When you know what you might want to do with it, like predicting the need for temp cover, and you've got everything we talked about in the foundation part of the course—the Gen and the Classic, all that stuff, you can look at a feature like Time to Hire and then you can probably pick that up in 30 seconds. 17:29 Nikita: Can we try it? Jeff: Sure! I mean, you know, we're not looking at screens for this conversation, but we can absolutely try it. You're a recruiter. If I tell you that Time to Hire is a feature that you run into on the job requisition and it shows you just a few editable fields, and then of course, the prediction of the number of days to hire—tell me how you think that feature is going to work when you get there. Lois: So, what are the fields? And does it matter? Jeff: Probably not really, but of course you can ask. So, let me tell you. Ready? The fields—they are these. Requisition Title, Location, and Education Level. 
Nikita: Ok, well, I have to assume that as I change those things… like from a Junior Podcast Assistant to a Senior Podcast Assistant, or change the location from Redwood Shores to Detroit, or change the required education, the time to hire is going to change, right? Jeff: 100%, exactly. And it does it in real time as you make those changes to those values. So when you pick a new location, you immediately get a new number of days, so it really is a useful tool. But how does it work? Well, we know it's using a few fields from the job requisition, but that's not enough. Besides those fields, what else would you need in order to make this prediction work? 18:43 Lois: The part where it translates to a number of days. So, this is based on our historic hiring data? How long it took us to hire a podcast assistant the last time? Jeff: Yep! And now you have everything you need. We call that “historic data from our company” bit “ingestion,” by the way. And there's always a really interesting discussion around that when it comes up in the course. But it's the process we use to bring in the HCM data to the AI so it can be considered for predictions exactly like this. Lois: So it's the HCM data making the AI smarter and more powerful. Nikita: And tailored. Jeff: Exactly, it's all of that. And obviously, the HCM is better because we've given it the AI. But the AI is also better because it has the HCM in it. But look, I was able to give you a quick description of Time to Hire, and you were able to tell me what it does, which data it uses, and how it works in just a few seconds. So, that's kind of the goal when we teach this stuff. It's getting everybody ready to be productive from moment #1 because the “what is it and how does it work” stuff is already out of the way, you know? 19:52 Lois: I do know! Nikita: Can we try it with another one? Jeff: Sure! How about we do...Suggested Candidates. 
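The Time to Hire mechanics just described, a few requisition fields plus ingested historical hires, can be sketched as a simple average over similar past requisitions. The data, fields, and fallback rule below are illustrative assumptions, not the actual model:

```python
# A minimal sketch of a non-generative "Time to Hire" style prediction.
from statistics import mean

# Hypothetical ingested history: (title, location, days from requisition open to start date)
HISTORY = [
    ("Podcast Assistant", "Redwood Shores", 45),
    ("Podcast Assistant", "Redwood Shores", 55),
    ("Podcast Assistant", "Detroit", 80),
]

def predict_time_to_hire(title: str, location: str) -> float:
    """Average the durations of similar past hires; fall back to all history."""
    similar = [days for (t, loc, days) in HISTORY if t == title and loc == location]
    if not similar:  # no close match: use everything we ingested
        similar = [days for (_, _, days) in HISTORY]
    return mean(similar)
```

Changing the location or title changes which historical hires count as "similar," which is why the prediction updates as the requisition fields change.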
Lois: And you're going to tell us what we get on the screen, and we have to tell you how it works, right? Jeff: Yeah, yeah, exactly. Ok—Suggested Candidates. You're a recruiter or a hiring manager. You guys are still looking for your Junior Podcast Assistant. On the requisition, you've got a section called Suggested Candidates. And you see the candidate's name and some scores. Those scores are for profile match, skills match, experience match. And there's also an overall match score, and the highest rated people you notice are sorted to the top of the list. So, you with me so far? Lois: Yes! Jeff: So you already know that it's suggesting candidates. But if you care about explainability and transparency like we talked about at the start, then you also care about where these suggested candidates came from. So let's see if we can make progress against that. Let's think about those match scores. What would you need in order to come up with match scores like that? 20:54 Nikita: Tell me if I'm oversimplifying this, but everything about the job on the requisition, and everything about the candidate? Their skills and experience? Jeff: Yeah, that's actually simplified pretty perfectly. So in HCM, the candidate profile has their skills and experience, and the req profile has the req requirements. Lois: So we're comparing the elements of the job profile and the person/candidate profile. And they're weighted, I assume? Jeff: That's exactly how it works. See, 30 seconds and you guys are nailing these! In fairness, when we discuss these things in the course, we go into more detail. And I think it's helpful for HCM practitioners to know which data from the person and the job profiles is being considered (and sometimes just as important, which is not being considered). And don't forget we're also considering our ingested data. Our previously selected candidates. 21:45 Lois: Jeff, can I change the weighting? 
If I care more about skills than experience or education, can I adjust the weighting and have it re-sort the candidates? Jeff: Super important question. So let me give you the answer first, which is “no.” But because it's important, I want to tell you more. This is a discussion we have in the class around Oracle's Embedded vs. Custom AI. And they're both really important offerings. With Embedded, what we're talking about are the features that come in HCM like any other feature. They might have some enablement steps like profile options, and there's an activation panel. But essentially, that's it. There's no inspection panel for you to open up and start sticking your screwdriver in there and making changes. Believe it or not, that's a big advantage with Embedded AI, if you ask me anyway. Nikita: It's an advantage to not be able to configure it? Jeff: In this context, I think you can say that it is. You know, we talk about the advantages about the baked-in, Embedded AI in this course, but one of the key things is that it's pre-built and pre-tested. And the big one: that it's ready to use on day one. But one little change in a prompt can have a pretty big butterfly effect across all of your results. So, Oracle provides the Embedded AI because we know it works because we've already tested it, and it's, therefore, ready on day one. And I think that story maybe changes a little bit when you open up the inspection panel and bust out that screwdriver. Now you're signing up to be a test pilot. And that's just fundamentally different than “pre-built and ready on day one.” Not that it's bad to want configuration. 23:24 Lois: That's what the Custom AI path and OCI are about though, right? For when customers have hyper-specific needs outside of Oracle's business processes within the apps, or for when that kind of tuning is really required. And your AI for HCM course—that focuses on the Embedded AI instead of Custom, yes? Jeff: That is exactly it, yes. 
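The Suggested Candidates scoring discussed above can be sketched as per-dimension match scores combined with fixed weights, then sorted best-first. The weights and fields below are illustrative assumptions; as Jeff notes, the embedded feature's actual weighting is pre-built and not user-configurable:

```python
# A minimal sketch of weighted candidate-to-requisition matching.
WEIGHTS = {"skills": 0.5, "experience": 0.3, "profile": 0.2}  # hypothetical fixed weights

def dimension_score(required: set, offered: set) -> float:
    """Fraction of the requisition's requirements the candidate covers."""
    return len(required & offered) / len(required) if required else 1.0

def overall_match(requisition: dict, candidate: dict) -> float:
    """Weighted sum of the per-dimension match scores."""
    return sum(
        WEIGHTS[dim] * dimension_score(requisition[dim], candidate[dim])
        for dim in WEIGHTS
    )

def suggested_candidates(requisition: dict, candidates: dict) -> list:
    """Candidate names sorted by overall match, best first."""
    return sorted(
        candidates,
        key=lambda name: overall_match(requisition, candidates[name]),
        reverse=True,
    )
```

Here the weights are frozen in the code, which mirrors the "no inspection panel" point: the sort order is deterministic and pre-tested rather than tunable per user.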
Nikita: You said there are about 30 of these AI features across HCM. So, when you teach the course, do you go through all of them or are there favorites? Ones that people want to spend more time on so you focus on those? Jeff: The professional part of me wants to tell you that we do try to cover all of them, because that explainability and transparency business we talked about at the beginning. That's for real, so I want our customers to have that for the whole scope. 24:12 Nikita: The professional part? What's the other part? Jeff: I guess that's the part that says sure, we need to hit all of them. But some of them are just inherently more fun to work on. So, it's usually the learners who drive that in the live classes when they get into something, that's where we spend the most time. So, I have my favorites too. The learners have their favorites. And we spend time where it's everybody's favorite. Lois: Like where? Jeff: Ok, so one is far from the most complex one, but I think it's really elegant in its simplicity. And it's the Celebrate feature, where we do employee recognition. There's an AI Assist available there. So when it's time to recognize a colleague, you just need to enter the headline or the title, and the AI takes it from there and just writes up the recognition. 24:56 Lois: What about that makes it a good example, Jeff? You said it's elegant. What do you mean? Jeff: I think it's a few things. So, start with the prompt. It's just the one line—just the headline. And that's your one input. So, type in the headline, get the recognition below. It's a great demonstration of not just the simplicity, but the power we get out of that simplicity. I always ask it to recognize my employees for implementing AI features in Oracle HCM, just to see what it comes up with. 
When it tells the employee that they're helping the company by automating routine tasks, bringing efficiency to the HR department, and then launches into specific examples of how AI features help in HCM, it really is pretty incredible. So, it's a simple demo, but it explains a lot about how the Gen AI works. Lois: That's really cool. 25:45 Nikita: So this one is generative AI. It's using the large language model to create the recognition based on the prompt, which is basically just whatever you entered in the headline. But how does that help explain how Gen AI works in HCM? Jeff: Well, let's take our simple prompt for example. There's a lot happening behind the scenes. It's taking our prompt, it's doing its LLM thing, but before it's done, it's creating the results in a very specific way. An employee recognition reads really differently than a job description. So, I usually describe this as the hidden part of our prompt. The visible part is what we typed. But it needs to know things like our desired output format. Make sure to use the person's name, summarize the benefits, and be sure to thank them for their contribution, that kind of stuff. So, those things are essentially hard-coded into the page. And that's to say, this is another area where we don't get an inspection panel that lets us go in and tweak the prompt. 26:42 Nikita: And that's generally how generative AI works? Jeff: Pretty much. Wherever you see an AI Assist button in HCM, that's more or less what's going on. And so when you get to some of the other more complex features, it's helpful to know that that is what's going on. Lois: Like where? Jeff: Well, it works that way for the About Me part of your employee profile, for goal creation in performance, and I think a really great example is in performance, where managers are providing the competency development tips. So the prompt there is a little more complex there because it involves the employee's proficiency rating instead of free text. 
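The hidden-prompt idea, where the one visible input is combined with instructions hard-coded into the page before the LLM call, can be sketched like this. The instruction text and function name are illustrative assumptions, not the actual page code:

```python
# A minimal sketch of assembling the full prompt from one visible field.
HIDDEN_INSTRUCTIONS = (  # hypothetical hard-coded page instructions
    "Write an employee recognition. Use the person's name, "
    "summarize the benefits of their work, and thank them for their contribution."
)

def build_recognition_prompt(headline: str, employee_name: str) -> str:
    """Combine the one visible field with the page's hidden instructions."""
    return f"{HIDDEN_INSTRUCTIONS}\nEmployee: {employee_name}\nHeadline: {headline}"
```

The user only ever types the headline; the rest of the prompt travels with the page, which is why the output format stays consistent without any user tuning.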
But still, pretty straightforward. You're gonna click AI Assist and it's gonna generate all the development tips for any specific competency listed for that employee. Good development tips. Five of them. Nicely formatted with bullet points. And these aren't random words assembled by an AI. So they conform to best practices in the development of competencies. So, something is telling the LLM to give us results that are that good, in that particular way. So, it's just another good example of the work AI is doing while protected behind the inspection panel that doesn't exist. So, the coding of that page, in combination with what the LLM generates and the agent that it uses, is what produces the result. That's generally the approach. In the class, we always have a good time digging into what must be going on behind that inspection panel. Generally speaking, the better feel we have for what's going on on these pages, the better we're able to get the results we want, even without having that screwdriver out. 28:21 Nikita: So it's time well-spent, looking at all the individual features? Jeff: I think so, especially if you're anticipating really using any of them. So, the good news is, once you learn a few of them and how they work, and what they're best at, you stop being surprised after a while. But there are always tips and tricks. And like we talked about at the top, explainability and transparency are absolutely key. So, as much as I'm not a fan of the phrase, I do think this is kind of a “knowledge is power” kind of situation. 28:51 Nikita: Sadly, we're just about out of time for this episode. Lois: That's too bad, I was really enjoying this. Jeff, you were just talking about knowledge—where can we get more? Jeff: Well, like you mentioned at the start, check out the AI in HCM course on MyLearn. It's about an hour and a half, but it really is time well spent. 
And we get into detail on everything the three of us discussed here today, and then we have demoscussions of every feature where we show them and how they work and which data they're using and a whole bunch more. So, there's that. Plus, I hear the instructor is excellent. Lois: I can vouch for that! Jeff: Well, then you should definitely look into Dynamic Skills. Different instructor. But we have another course, and again I think about an hour and a half, but when you're done with the AI course, I always feel like Dynamic Skills is where you really wanna go next to really flesh out all the Talent Management ideas that got stirred up while you were having a great time in the AI course. And then finally, the live classes. It's always really fun to take live questions while we talk about AI in HCM. 29:54 Nikita: Thanks, Jeff! This has been really interesting. Lois: Yeah, thanks for being here, Jeff. We've loved having you on. Jeff: Thank you guys so much for having me. It's been a pleasure. Lois: If you want to learn more about what we discussed, go to the show notes for today's episode. You'll find links to the AI for Human Capital Management and Dynamic Skills courses that Jeff mentioned so you can check them out. You can also head over to mylearn.oracle.com to find the live sessions for MyLearn subscribers that Jeff conducts. Nikita: Join us next week as we kick off our “Best of 2024” season, where we'll be revisiting some of our most popular episodes of the year. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 30:35 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
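The "hidden prompt" idea Jeff describes in the episode above can be sketched in a few lines. This is a speculative illustration, not Oracle's actual code: the function name, the system/user message structure, and the instruction wording are all assumptions made for the sake of the example.

```python
# A minimal sketch (assumed, not Oracle's implementation) of how an "AI Assist"
# page might combine the user's visible prompt with hidden, hard-coded
# instructions before sending anything to the LLM.

def build_recognition_prompt(headline: str) -> list[dict]:
    """Combine the visible headline with the page's hidden instructions."""
    hidden_instructions = (
        "You are drafting an employee recognition, not a job description. "
        "Use the person's name, summarize the benefits of their contribution, "
        "and close by thanking them."
    )
    return [
        # The hidden part: hard-coded into the page, invisible to the user.
        {"role": "system", "content": hidden_instructions},
        # The visible part: whatever the user actually typed.
        {"role": "user", "content": headline},
    ]

messages = build_recognition_prompt("Great job on the Q3 benefits rollout!")
print(messages[0]["role"], "->", messages[1]["content"])
```

The point of the sketch is simply that the page, not the user, controls output format and tone; the user's text is only one ingredient of the final prompt.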
What must, and what should, be included by the trustee of a trust in an accounting? And who is entitled to receive what kind of information? In this episode, our guest Denise Riley discusses the details of what is required in an accounting. About Our Guest: Denise Riley is the Senior Fiduciary Officer of the West Region in the San Francisco office of Northern Trust. In addition to being the fiduciary practice leader, she is responsible for the development and implementation of fiduciary strategies to serve the needs of Northern's private clients. Denise has broad estate planning and fiduciary experience. Prior to her career at Northern Trust, she practiced law, specializing in tax, estate and charitable planning, estate and trust administration, and tax controversy, most recently at Orrick, Herrington and Sutcliffe. She also headed the Northern California fiduciary practice at U.S. Trust and served as the Chief Fiduciary Officer at Morgan Stanley Trust Company. She is a member of the State Bar of California and the American Bar Association, where she has been active in several estate planning and taxation committees. She is past president of the San Francisco Area Women Tax Lawyers and has written and spoken on a variety of tax and estate planning topics. About Our Host: Erika Gasaway is a shareholder at Hopkins & Carley, a premier trust and estate planning, administration, and litigation firm in downtown San Jose and Redwood Shores, California. Erika represents clients in trust and estate disputes such as trust contests, will disputes, and enforcing trusts at all stages, from pre-litigation counseling and negotiations all the way through trial or settlement. She is a member of the Trusts and Estates Section of the California Lawyers Association and several other associations for estate planning and professional fiduciaries. She can be reached at egasaway@hopkinscarley.com or via LinkedIn.
Thank you for listening to Trust Me! Trust Me is produced by Foley Marra Studios and edited by Todd Gajdusek.
Oh boy... For this new Plouf & Pseudo, we decided to revisit Les Simpson: Le jeu (The Simpsons Game, 360 and PS3 versions). The famous "open world" game developed by the Redwood Shores studio under the banner of Electronic Arts in the early 2000s. Released in the wake of the famous Les Simpson: Le film (The Simpsons Movie), while standing very clearly apart from it, Les Simpson: Le jeu has more or less left its mark. But here's the thing: 15 years later, what remains of such a title, while the animated series Les Simpson carries on without that affecting anyone? Well, that's what we're going to try to find out together. Support us on Patreon, with some very nice perks! Follow us on Twitter. Pseudo's channel. Plouf's channel. The podcast's YouTube channel, for the video version.
In this episode Ramesh shares his experience and perspectives related to:
- His interest in software and programming, fueled by an IIT professor teaching programming
- Being one of the very few who did a software project as part of his degree, and then joining Tata Burroughs from campus in their first batch
- Getting back to academics to get his Master's degree
- How his extra learning at IISc and the work experience of his first couple of years helped him skip a few courses and jump into more advanced topics during the Master's
- Joining HCL in their Singapore venture, Far East Computers Limited
- How his experience facing customers helped in the assignment of setting up a QA environment at Oracle headquarters
- Moving to management roles in engineering management
- Jumping at the opportunity to start a development center in India from scratch, thinking that he could do that for a year and then get back to the USA
- How he had to take on various responsibilities: not just delivering, but also establishing processes, building networks for recruitment, etc.
- Realizing the need to take innovative approaches to attract the right talent when competing with other companies that would hire in large numbers
- Taking up a teaching role at Anna University, and how that gave him an opportunity to observe students over a semester rather than learn about them by interacting for 20 minutes in an interview
- Extending this learning through internships
- Starting to love teaching
- The trigger for his getting into writing [and authoring books], some of which are standard curriculum recommendations in about 50 universities across the world
- "Transferring knowledge to others will have a bigger impact on society as a whole"
- Enjoying the role of a teacher at various institutions
- How some interests popped up by necessity
- Slicing time every day for work, education, and family, to make time for all
- Taking risks, knowing that all options may not progress as desired
- Some tips that would help in selecting the right candidates
- How everyone should feel comfortable to disagree, but not be disagreeable
- How, as a technical guy, you can have all the answers, but as a leader, you should have all the questions
- His career advice: be passionate about what you do; do not undervalue the importance of keeping your skills up to date; and why you should become useless for your job

Please bear with some audio distractions in the background in patches. You will love this story.

Gopalaswamy Ramesh is an independent consultant, eminent author, and has served as a visiting/adjunct professor in several reputed universities; he is an acknowledged thought leader in the areas of global software project management, software testing, and related areas, as well as in soft skills. He has over three decades of experience in industry, consulting, and teaching. He played a key role in establishing Oracle's India Development Center and grew it from zero to close to 500 people. His contributions to this phenomenal growth were acknowledged in Oracle International magazine. When he left Oracle in 2002, he was their Senior Director, heading the engineering team at Bangalore. Before returning to India in 1994 to set up Oracle IDC, Ramesh worked at Oracle Headquarters in Redwood Shores, California, holding several technical and management positions. He also headed pre-sales and post-sales support of Oracle products in the ASEAN countries while working at Far East Computers, Singapore, from 1982 till 1988. During this stint, he actively participated in launching and supporting Oracle in several countries in the region. Since 2002, Ramesh has been an independent consultant, offering consultancy services in the areas of global software project management, software testing, and soft skills in India and abroad. His passion for teaching and sharing his vast knowledge and rich experience has resulted in his writing eight books that have become best sellers, two of which have also been translated into Chinese.
His books have been widely read and used as prescribed or reference books in a number of universities in India and abroad. The book Managing Global Software Projects won the National Award for the Best Book. As a thought leader, Ramesh has been an active invited participant in several well-known conferences and workshops. He has played the roles of Program Chair and Technical Review Committee Head, as well as invited keynote presenter, at several international conferences. Passionate about interacting with students and imparting relevant practical knowledge and training, Ramesh has been an Adjunct Professor in several leading universities and has taught in a number of very reputed institutions such as IIIT-Bangalore, IIT Madras, Anna University, Chennai, and Amrita School of Business. He was part of the Academic Advisory Board of the Project Management Institute, India from 2009 to 2014, during which time he was the Technical Committee Chair for the first two Research and Academic Conferences (RAC) on Project Management in India. Ramesh is passionate about inculcating the right value systems in school children and about people with special needs. He has also written books on moral education, நிச்சயம் வெல்லலாம் நேரான பாதையில் in Tamil and Virtuous and Victorious – Always in English, for school children. He has translated from English into Tamil the best-selling book GIFTED, about the achievements of fifteen differently abled people. He and his wife, Lakshmi Ramesh, who is a trained special educator, are active participants in initiatives for people with special needs. Ramesh holds a BE degree in Electronics and Communication Engineering from the Indian Institute of Science, Bangalore, an MS degree in Computer Science from the Indian Institute of Technology, Madras (his thesis on Automatic Programming won the National Gold Medal), and an MS degree in Engineering Management from Stanford University, California. Ramesh may be reached at gopalaswamy_ramesh@yahoo.com
For 17 years, Redwood Shores, California was home to an incredibly amazing theme park called Marine World/Africa U.S.A., a place I fondly remember going to when I was a kid. To see a live video of the recording of this episode, or to see a bonus video where I talk more about Marine World/Africa U.S.A., visit patreon.com/saturdaymorningmedia RESOURCES: https://bit.ly/3o7aa1D https://bit.ly/3hveZ2q FOLLOW GRANT http://www.MrGrant.com http://www.twitter.com/toasterboy https://instagram.com/throwingtoasters/ ©2021 Saturday Morning Media/Grant Baciocco
In this episode of the CEO Insider, host Sandy McIlree and YTexas CEO Ed Curtis discuss Oracle Corporation. The multinational computer technology company was headquartered in Redwood Shores, California, until December 2020, when it moved its headquarters to Austin, Texas.
This week's episode is with Brendan Watkins from Redwood Shores. Brendan is from Ireland and made his way to the Bay Area after living in New York City. He spends his days sharply dressed at Stanford Children's Health as their Director of Enterprise Analytics, then spends his nights and mornings running. He is a sub-3 hour marathoner who participated in the Triple 8 Quest––8 marathons in 8 days on 7 continents plus New Zealand. In this episode, we discuss how Brendan started running, his quest to win a marathon, and his epic running and traveling adventures. Some highlights from our conversation:
- Balancing work and running at a high level
- Learning to love running on the streets of New York City
- Breaking 3 hours in the marathon in his late 30s
- What it was like running 8 marathons in 8 days on 7 continents plus New Zealand, especially for someone who is “not a good flyer”
- Winning a marathon on the last day of the Triple 8 Quest in Antarctica
- How the Triple 8 Quest changed him
- How running helps him clear his mind
Follow Brendan on Twitter @brendanwatkins and follow his training on Strava. You can read his blog about the Triple 8 Quest here https://brendanstriple8quest.blogspot.com/ Follow Runners of The Bay on Instagram and Twitter @Runnersofthebay. Check out our website runnersofthebay.com and send us an e-mail at runnersofthebay@gmail.com. Music is California by The Spectacular Fantastic.
Larry Pon has a tax and financial planning practice in Redwood Shores, CA. This is his 34th tax season and he has helped hundreds of people reach their financial goals through smart tax and financial planning. Larry regularly speaks to groups about tax and financial planning topics. Visit his website Larry Pon CPA
Do you know how to build your business? What is the future for the financial services industry? How does being focused on the client make you stand out? In this episode, Tony spoke with Bob Bingham. Bob is the co-founder of the wealth management firm Bingham, Osborn and Scarborough. He founded the firm in 1985 with his Amherst College classmate Ed Osborn. BOS now manages close to $5 billion in assets, has nine partners, and approximately 50 employees in two Bay Area offices: its headquarters in San Francisco and its branch office in Redwood Shores.

Connect with Bob Bingham: Bingham, Osborn and Scarborough home page: www.bosinvest.com

Subscribe to the GET READY! With Tony Steuer Podcast
Buzzsprout: http://www.buzzsprout.com/530449
YouTube: https://www.youtube.com/channel/UCLFLB1LrzEHs0q6LXehCBqA

About The GET READY! with Tony Steuer Podcast: On the GET READY! Podcast, I’ll be catching up with inspiring professionals from a variety of backgrounds and experiences to help bring you innovative strategies to organize your financial life. GET READY! With Tony Steuer will help you be informed and financially prepared today and in the future. We’ll also talk about best practices from both the consumer and industry perspectives.
DevOps is a hot topic, but is that heat driving adoption? Are organizations on the adoption path making headway in the cultural and technological changes necessary for DevOps success? A panel of DevOps experts discusses these and other issues in this freewheeling conversation.

The Panelists
- Nicole Forsgren, Founder and CEO, DevOps Research and Assessment LLC, Beaverton, OR
- Leonid Igolnik, Product Development Executive, Startup Mentor and Advisor, Sand Hill Angels, San Francisco Bay Area
- Alena Prokharchyk, Principal Software Engineer, Rancher Labs, Cupertino, CA
- Baruch Sadogursky, Developer Advocate, JFrog, Cupertino, CA
- Shay Shmeltzer, Director of Product Management, Oracle Cloud Development Tools, Redwood Shores, CA
- Kelly Shortridge, Product Manager at SecurityScorecard, NYC

Coming Soon

Combating Complexity: An article in the September 2017 edition of the Atlantic warned of The Coming Software Apocalypse. Oracle's Chris Newcombe was interviewed for that article. In this podcast Chris joins Chris Richardson, Adam Bien, and Lucas Jellema to discuss heading off catastrophic software failures.

AI Beyond Chatbots: How is Artificial Intelligence being applied to modern applications? What are the options and capabilities? What patterns are emerging in the application of AI? A panel of experts provides the answers to these and other questions.
Award-winning marketing expert Linda J. Popky, the founder and president of Redwood Shores-based Leverage2Market Associates, transforms organizations through powerful marketing performance. Her clients range from small businesses and consultants to mid-sized companies and large Fortune 500 enterprises. She’s been involved with many of the Silicon Valley companies who developed and deployed the technologies that have changed the world over the last twenty-five years, including Sun Microsystems, Cisco Systems, NetApp, PayPal, Plantronics, Autodesk, Applied Materials, and others. A consultant, speaker, and educator, Linda has been named one of the top women of influence in Silicon Valley and inducted into the Million Dollar Consultant® Hall of Fame. She is the past president of Women in Consulting and is a member of the Watermark Strategic Development Board. The first marketing expert worldwide certified to offer the Private Roster™ Mentoring Program for consultants and entrepreneurs, Linda has taught marketing at San Francisco State University’s College of Extended Learning, University of California Santa Cruz Extension in Silicon Valley, and West Virginia University’s Integrated Marketing Communications program. Linda holds an MBA and a BS in Communications from Boston University. Her new book, Marketing Above the Noise: Achieve Strategic Advantage with Marketing that Matters, was released in March 2015. A classically trained pianist, Linda has also produced Night Songs, a CD of classical piano music.
When is managing a Cloud ecosystem like playing Sim City? A panel of Oracle cloud computing experts responds to this and other audience questions. Recorded live at OTN Architect Day, Redwood Shores, CA, July 9, 2013.
What's the best migration path to Oracle 12c that will accommodate an eventual move to the cloud? A panel of Oracle cloud computing experts responds to this and other audience questions. Recorded live at OTN Architect Day, Redwood Shores, CA, July 9, 2013.
A panel of Oracle cloud computing experts responds to audience questions. Recorded live at OTN Architect Day, Redwood Shores, CA, July 9, 2013.
Stanford and Wisconsin put on a show at Redwood Shores, California. Wisconsin’s Max Goff and Stanford’s Alex Osborne introduce "The Rower's Perspective" and we hear from coaches Steve Todd, Chris Clark and Craig Amerkhanian.Click the picture to watch. 357 MB, 32:32. Flash version.
Under excellent racing conditions, Washington rowed away from Cal to retain ownership of the Schoch Cup. Click the picture to watch or download the movie (to download on a PC, right-click and "save target as...", and on a Mac, ctrl-click and "Download Linked File". It's 326 MB in size, 34:42 in duration). Flash version.
For the first time in 18 years, Stanford defeats Cal at Redwood Shores to win the Schwabacher Cup. Click the picture to watch or download the movie (to download on a PC, right-click and "save target as...", and on a Mac, ctrl-click and "Download Linked File". It's 305 MB in size, 32:40 in duration). Flash version.
On this podcast, Joe and Roger are delighted to have a conversation with Bobby Yazdani, the Founder and Chairman of the Board of Saba, Inc. Bobby Yazdani is the driving force behind the Saba vision, products, and market direction. He founded Saba in 1997 and took the company public in 2000. Today, Saba, headquartered in Redwood Shores, Calif., is the leading provider of Human Capital Development and Management solutions. Among the Global 2000, Saba customers include Allianz Group, DaimlerChrysler, EDS, Procter & Gamble, Ford Motor Company, and Cisco Systems. Yazdani is a frequent spokesperson at industry events held by such prestigious organizations as the United Nations and The Economist. Prior to founding Saba, Yazdani served as Senior Director at Oracle Corporation. Yazdani has a BA in Applied Mathematics from the University of California at Berkeley. We know that you'll enjoy Cullinane & Green's latest in our series of conversations with CEOs. As always, we're on the bleeding edge so you don't have to be!