POPULARITY
Battery development is typically a lengthy and difficult process, but one company is proving that it doesn't have to be. Listen in as we sit down with Gavin White, Co-Founder and CEO of About:Energy, to discover how model-based design and advanced simulation are revolutionizing battery development — cutting timelines by up to 70%. You'll also learn about the creation of The Voltt database, a game-changing resource for engineers and manufacturers. From motorsports to satellites to microgrids in developing regions, this conversation is proof that batteries are shaping the future of energy, technology, and society. We'd love to hear from you. Send your comments, questions, and ideas for future topics and guests to podcast@sae.org. Don't forget to take a moment to follow SAE Tomorrow Today—a podcast where we discuss emerging technology and trends in mobility with the leaders, innovators and strategists making it all happen—and give us a review on your preferred podcasting platform. Follow SAE on LinkedIn, Instagram, Facebook, X, and YouTube. Follow host Grayson Brulte on LinkedIn, X, and Instagram.
In this episode of the Energy Newsbeat Daily Standup - Weekly Recap, Stuart Turley and Michael Tanner break down why model-based oil forecasts consistently miss the mark, highlighting Irina Slav's takedown of flawed IEA predictions. They dive into Fed Chair Powell's Jackson Hole speech and its implications for oil and gas capital markets, LNG export-driven shale growth, the myth of peak Permian, and ERCOT's $14B clean energy project cancellations. From misguided net-zero assumptions to underreported system costs in renewables, this episode covers the real data behind energy trends and what investors should really be watching.

Subscribe to Our Substack For Daily Insights
Want to Add Oil & Gas To Your Portfolio? Fill Out Our Oil & Gas Portfolio Survey
Need Power For Your Data Center, Hospital, or Business?
Follow Stuart On LinkedIn: https://www.linkedin.com/in/stuturley/ and Twitter: https://twitter.com/STUARTTURLEY16
Follow Michael On LinkedIn: https://www.linkedin.com/in/michaelta... and Twitter: https://twitter.com/mtanner_1

Timestamps:
00:00 - Intro
00:14 - What Does Powell's Comments in Jackson Hole Mean to the Oil and Gas Markets and Investors?
03:40 - Surging US LNG Exports Fuel Growth in US Shale
08:13 - ERCOT Project Cancellations Reached a Record in Q2 2025, and What is Next?
10:43 - The True Cost of Renewable Energy and the Impact on Consumers' Electrical Bills
13:33 - Why Model-Based Oil Forecasts Keep Missing the Mark
18:13 - Outro

Links to articles discussed:
What Does Powell's Comments in Jackson Hole Mean to the Oil and Gas Markets and Investors?
Surging US LNG Exports Fuel Growth in US Shale
ERCOT Project Cancellations Reached a Record in Q2 2025, and What is Next?
The True Cost of Renewable Energy and the Impact on Consumers' Electrical Bills
Why Model-Based Oil Forecasts Keep Missing the Mark
In the second episode of our four-part podcast series with Siemens, Chiranjib Sengupta sat down with John Nixon, Vice President of Global Strategy for Energy, Chemicals and Infrastructure and Mike Houghton, Global Head of Sales, both at Siemens Digital Industries Software, to unpack the secrets of how energy companies can maximise their returns on digital investment. John and Mike highlighted emerging digital technologies and data-driven solutions that can help Chief Financial Officers (CFOs) better understand the impact of a company's financial pathway. They also explored the digital infrastructure that companies need to benefit from model-based financial optimisation, and addressed the skills and training component for CFOs and other stakeholders to start incorporating such optimisation in typical business plans and operations.
Prof Thomas Akam is a neuroscientist at the Oxford University Department of Experimental Psychology. He is a Wellcome Career Development Fellow and Associate Professor at the University of Oxford, and leads the Cognitive Circuits research group.

Featured References
Brain Architecture for Adaptive Behaviour
Thomas Akam, RLDM 2025 Tutorial

Additional References
Thomas Akam on Google Scholar
pyPhotometry: Open source, Python based, fiber photometry data acquisition
pyControl: Open source, Python based, behavioural experiment control
Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control, Nathaniel D Daw, Yael Niv, Peter Dayan, 2005
Further analysis of the hippocampal amnesic syndrome: 14-year follow-up study of H. M., Milner, B., Corkin, S., & Teuber, H. L., 1968
Internally generated cell assembly sequences in the rat hippocampus, Pastalkova E, Itskov V, Amarasingham A, Buzsáki G. Science. 2008
Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM) 2025
Darshan H. Brahmbhatt, Podcast Editor of JACC: Advances, discusses a recently published original research paper on LLMonFHIR: A Physician-Validated, Large Language Model–Based Mobile Application for Querying Patient Electronic Health Data.
In this episode of Advanced Manufacturing Now, Editor David Muller interviews John McCullough of Kubotek Kosmos about the evolution of CAD technology, the challenges of digital communication in manufacturing, and the emerging trend of model-based definition. John shares insights on how A&D industries are driving innovation in data sharing, interoperability, and the potential impact of digital transformation on manufacturing processes.
In this episode, David Muller interviews Naveen Poonian, President and CEO of iBaseT, discussing the significance of model-based enterprise in manufacturing, particularly in the aerospace and defense sectors. They explore how iBaseT's software solutions simplify complex operations, enhance quality assurance, and drive operational efficiency.
This week model-based design and software defined vehicles take center stage! My guest Jim Tung (MathWorks fellow) and I discuss the trends driving a push toward software defined vehicles, the benefits of model-based design for SDV development and the tools that engineers should consider for virtualizing vehicle behavior. Also this week, I investigate a new enhanced event camera developed by a team of researchers at the University of Maryland which could vastly improve how robots see and react to the world around them.
Interview with William R. Small, MD, MBA, and Adam Szerencsy, DO, authors of Large Language Model–Based Responses to Patients' In-Basket Messages. Hosted by JAMA Associate Editor Angel N. Desai, MD, MPH. Related Content: Large Language Model–Based Responses to Patients' In-Basket Messages
What if creating a more ethical and transparent supply chain was as simple as following a five-step process? In today's episode I sit down with Aaron Lee, founder of Alchem Trading, to discover his innovative CLEAN framework for chemical distribution. From sourcing bulk chemicals for water treatment to those used in cosmetics, Aaron discusses the complexities of global supply chains. Understand the pivotal role of reliability, regulatory adherence, and the ethical considerations of responsible consumption within the chemical industry. Aaron shares how smaller companies can carve out a niche by focusing on specialised chemistries often overlooked by larger players. Discover why positioning yourself as a problem-solver and maintaining visibility can make you indispensable to your clients. This is more than just selling a product; it's about building relationships and being the go-to expert when challenges arise. Tune in to learn how to make your mark in the ethical, transparent, and highly competitive world of chemical distribution.

--------- EPISODE CHAPTERS ---------

(0:00:00) - Ethical and Transparent Chemical Distribution (12 Minutes)
This chapter explores the five fundamental steps of the CLEAN framework for chemical distribution. We discuss the importance of clarifying customer needs, ranging from finding alternative sources to ensuring market competitiveness.

(0:12:28) - Creating a Clean Chemical Framework (7 Minutes)
Aaron shares the creation of the CLEAN framework for chemical distribution, designed to build reliable, ethical, and sustainable supply chains. Frustrated with the complexity often added by distributors, he developed this five-step process: ‘Clarify, Locate, Evaluate, Agree, Nurture'. This structured approach ensures efficiency and ethical practices in chemical distribution.
(0:19:57) - Sales Strategies in Chemical Distribution (12 Minutes)
We explore how niche markets and specialised chemistries can be more accessible to smaller companies, as larger distributors may overlook them due to their size. The conversation highlights the significance of providing reliable service, transparency, and leveraging a global network, rather than merely selling a product. Additionally, we discuss how positioning oneself as a problem-solver and offering valuable insights can attract clients, even if they don't initially need your services. We also cover the importance of relationship-building, being present at the right time, and maintaining visibility so clients think of you when issues arise.

Follow Aaron
LinkedIn: https://www.linkedin.com/in/aaron-lee-73823074/
Website: https://www.alchemtrading.com/

Follow me
https://linktr.ee/fredcopestake

Take the Collaborative Selling Scorecard
https://collaborativeselling.scoreapp.com/

Watch this episode on YouTube
https://youtu.be/EU52m4knyzA
In this episode of the BetterTech podcast about future trends and innovations, host Sophia Moshasha interviews Todd Kackley, VP and CIO at Textron. Todd shares insights into his journey in IT, starting from ERP consulting to leading Textron's technology initiatives. He discusses the importance of model-based enterprise and digital twins, highlighting how these technologies enhance product lifecycle management from design to manufacturing and sustainment. Todd explains how Textron uses augmented and virtual reality to improve both internal operations and customer engagement. He also addresses the challenges of integrating digital systems across ecosystems and emphasizes the need to align IT strategies with business objectives. Throughout the conversation, Todd underscores the value of people, process, and technology in driving innovation and competitive advantage at Textron. --- Send in a voice message: https://podcasters.spotify.com/pod/show/bettertech/message
President Joe Biden on Wednesday announced the cancellation of another $7.7 billion in student debt for 160,000 borrowers, bringing the total number of people to get their debt cancelled to 4.75 million, despite Republican opposition.

U.S. existing home sales unexpectedly fell in April as higher mortgage rates and house prices weighed on demand, dealing another setback to the housing market.

The Consumer Financial Protection Bureau will apply some credit card consumer protection rules to buy now, pay later (BNPL) lenders, in a bid to impose more oversight on the fast-growing sector.

China is rolling out its own version of ChatGPT. But content generated by this chatbot is based on the philosophies and writings of Chinese communist leader Xi Jinping.

May is National Military Appreciation Month, and Memorial Day is upon us. For veterans transitioning from military to civilian life, which states among the 50 make it easier for military families to adjust and thrive financially? NTD spoke to Christie Matherne, editor from WalletHub, about their latest ranking.
Step into the world of language model-based chatbots with our latest podcast episode! Join us for an in-depth exploration of the study titled "The Silence of the LLMs: Cross-Lingual Analysis of Political Bias and False Information Prevalence in ChatGPT, Google Bard, and Bing Chat." In this insightful episode, our host engages in a compelling interview with the researchers behind the study—Aleksandra Urman from the Department of Informatics at the University of Zurich (urman@ifi.uzh.ch) and Mykola Makhortykh from the Institute of Communication and Media Studies at the University of Bern (mykola.makhortykh@unibe.ch). Discover key findings from their groundbreaking research, offering a cross-lingual analysis of political bias and false information prevalence in large language model-based chatbots. Uncover the implications of their work on the trustworthiness of AI-driven chat systems. For further inquiries or to join the conversation, reach out to Aleksandra and Mykola via email. This episode provides a thought-provoking journey into the complexities of language models, political bias, and the prevalence of false information in the realm of contemporary chatbot technologies. Access the full study here: https://osf.io/q9v8f/download
Computer Aided Software Engineering (CASE) tools, which helped make the analysis, design, and implementation phases of software development better, faster, and cheaper, fell out of favor in the mid-'90s. Yet much of what they have to offer remains and is in active use within different Oracle tools. Listen to Lois Houston and Nikita Abraham interview Senior Principal OCI Instructor Joe Greenwald about the origins of CASE tools and model-based development, as well as how they evolved into their current forms. Develop Fusion Applications Using Visual Builder Studio: https://mylearn.oracle.com/ou/course/develop-fusion-applications-using-visual-builder-studio/122614/ Build Visual Applications Using Visual Builder Studio: https://mylearn.oracle.com/ou/course/build-visual-applications-using-oracle-visual-builder-studio/110035/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. 00:26 Nikita: Hello and welcome to the Oracle University Podcast! I'm Nikita Abraham, Principal Technical Editor with Oracle University, and joining me is Lois Houston, Director of Innovation Programs. Lois: Hi there! In our last episode, we looked at Oracle's Redwood design system and how it helps create world-class apps and user experiences. Today, Joe Greenwald, our Senior Principal OCI Instructor, is back on our podcast. 
We're going to focus on where model-based development tools came from: their start as CASE tools, how they morphed into today's model-based development tools, and how these tools are currently used in Oracle software development to make developers' lives better. 01:08 Nikita: That's right. It's funny how things that fell out of favor years ago come back and are used to support our app development efforts today. Hi Joe! Joe: Haha! Hi Niki. Hi Lois. 01:18 Lois: Joe, how did you get started with CASE tools? Joe: I was first introduced to computer-aided software engineering tools, called CASE tools, in the late 1980s when I began working with them at Arthur Young consulting and then Knowledgeware corporation in Atlanta, helping customers improve and even automate their software development efforts using structured analysis and design techniques, which were popular and in high use at that time. But it was a pain to have to draw diagrams by hand, redraw them as specifications changed, and then try to maintain them to represent the changes in understanding what we were getting from our analysis and design phase work. CASE tools were used to help us draw the pictures as well as enforce rules and provide a repository so we could share what we were creating with other developers. I was immediately attracted to the idea of using diagrams and graphical images to represent requirements for computer systems. 02:08 Lois: Yeah, you're like me. You're a visual person. Joe: Yes, exactly. So, the idea that I could draw a picture and a computer could turn that into executable code was fascinating to me. Pictures helped us understand what the analysts told us the users wanted, and helped us communicate amongst the teams, and they also helped us validate our understanding with our users. 
This was a critical aspect because there was a fundamental cognitive disconnect between what the users told the analysts they needed, what the analysts told us the users needed, and what we understood was needed, and what the user actually wanted. There's a famous cartoon, you can probably find this on the web, that shows what the users wanted, what was delivered, and then all the iterations that the different teams go through trying to represent the simple original request. I started using entity relationship diagrams, data flow diagrams, and structure charts to support the structured analysis, design, and information engineering methods that we were using at the time for our clients. Used correctly, these were powerful tools that resulted in higher quality systems because it forced us to answer questions earlier on and not wait until later in the project life cycle, when it's more expensive and difficult to make changes. 03:16 Nikita: So, the idea was to try to get it wrong sooner. Joe: That's right, Niki. We wanted to get our analysis and designs in front of the customer as soon as possible to find out what was wrong with our models and then change the code as early in the life cycle as possible where it was both easier and, more importantly, cheaper to make changes before solidifying it in code. Of course, the key words here are “used correctly,” right? I saw the tools misused many times by those who weren't trained properly or, more typically, by those whose software development methodology, if there even was one, didn't use the tools properly—and of course the tools took the blame. CASE tools at the time held a lot of promise, but one could say vendors were overpromising and under delivering, although I did have a number of clients who were successful with them and could get useful support for their software development life cycle from the use of the tools. Since then, I've been very interested in using tools to make it easier for us to build software. 
04:09 Nikita: So, let me ask you Joe, what is your definition of a CASE tool? Joe: I'm glad you asked, Niki, because I think many people have many different definitions. I'm precise about it, and maybe even a bit pedantic with the definition. The definition I use for a CASE tool comprises four things. One, it uses graphics, graphical symbols, and diagrams to represent requirements and business rules for the application. Two, there is a repository, either private, or shared, or both, of models, definitions, objects, requirements, rules, diagrams, and other assets that can be shared, reused, and almost more importantly, tracked. Three, there's a rule-base that prevents you from drawing things that can't be implemented. For example, Visio was widely regarded as a CASE tool, but it really wasn't because it had no rules behind it. You could wire together anything you wanted, but that didn't mean it could be built. Fourth, it generates useful code, and it should do two-way engineering, where code, typically code changed outside the model, can be reverse engineered back into the model and apply updates to the model, and to keep the model and the source code in synchronization. 05:13 Joe: I came up with a good slogan for CASE tools years ago: a good CASE tool should automate the tedious, manual portions of software development. I'd add that one also needs to be smarter than the tools they're using. Which reminds me, interestingly enough, of clients who would pick up CASE tools, thinking that they would make their software development life cycle shorter. But if they weren't already building models for analysis or design, then automating the building of things that they weren't building already was not going to save them time and effort. 
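Joe's four-part definition lends itself to a small sketch. The toy below (purely hypothetical, not modeled on any real CASE product) shows three of his four criteria in miniature: a shared repository of model objects, a rule-base that refuses connections that can't be built, and generation of useful code from the model.

```python
# Toy sketch of Joe's CASE-tool criteria: a repository of model objects,
# a rule-base rejecting invalid diagrams, and code generation.
# Hypothetical names throughout; illustrative only.

class Repository:
    def __init__(self):
        self.entities = {}        # entity name -> list of (column, sql_type)
        self.relationships = []   # (parent, child) pairs

    def add_entity(self, name, columns):
        self.entities[name] = columns

    def relate(self, parent, child):
        # Rule-base: unlike a free-form drawing tool, you cannot "wire
        # together anything you want" -- both ends must exist in the model.
        for end in (parent, child):
            if end not in self.entities:
                raise ValueError(f"unknown entity: {end}")
        self.relationships.append((parent, child))

    def generate_ddl(self):
        # Code generation: emit CREATE TABLE statements from the model.
        out = []
        for name, cols in self.entities.items():
            body = ", ".join(f"{col} {typ}" for col, typ in cols)
            out.append(f"CREATE TABLE {name} ({body});")
        return "\n".join(out)

repo = Repository()
repo.add_entity("customer", [("id", "INTEGER PRIMARY KEY"), ("name", "TEXT")])
repo.add_entity("orders", [("id", "INTEGER PRIMARY KEY"), ("customer_id", "INTEGER")])
repo.relate("customer", "orders")
print(repo.generate_ddl())
```

The fourth criterion, two-way engineering, would add a parser that reads changed DDL back into the repository, which is exactly the piece Joe notes most tools struggled with.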
And some people adopted CASE tools because they were told to or worse, forced to, or they read an article on an airplane, or had a Eureka moment, and they would try to get their entire software development staff to use this new tool, overnight literally, in some cases. Absolutely sheer madness! Tools like this need to be brought into the enterprise in a slow, measured fashion with a pilot project and build upon small successes until people start demanding to use the tools in their own projects once they see the value. And each group, each team would use the CASE tool differently and to a different degree. One size most definitely does not fit all and identifying what the teams' needs are and how the tool can automate and support those needs is an important aspect of adopting a CASE tool. It's funny, almost everyone would agree there's value in creating models and, eventually, generating code from them to get better systems and it should be faster and cheaper, etc. But CASE tools never really penetrated the market more than maybe about 18 to 25%, tops. 06:39 Lois: Huh, why? Why do you think CASE tools were not widely accepted and used? Joe: Well, I don't think it was an issue with the tools so much as it was with a company's software development life cycle, and the culture and politics in the company. And I imagine you're shocked to hear that. Ideally, switching to or adopting automated tools like CASE tools would reduce development time and costs, and improve quality. So it should increase reusability too. But increasing the reusability of code elements and software assets is difficult and requires discipline, commitment, and investment. Also, there can be a significant amount of training required to teach developers, analysts, project managers, and senior managers how to deal with these different forms of life cycles and artifacts: how they get created, how to manage them, and how to use them. 
When you have project managers or senior managers asking where's the code and you try telling them, “Well, it's gonna take a little while. We're building models and will press the button to generate the code.” That's tough. And that's also another myth. It was never a matter of build all the models, press the button, generate all the code, and be done. It's a very iterative process. 07:40 Joe: I've also found that developers find it very psychologically reinforcing to type code into the keyboard, see it appear on the screen, see it execute, and models were not quite as satisfying in the same way. There was kind of a disconnect. And are still not today. Coders like to code. So using CASE tools and the discipline that went along with them often created issues for customers because it could shine a bright light on the, well let's say, less positive aspects of their existing software development process. And what was seen often wasn't pretty. I had several clients who stopped using CASE tools because it made their poor development process highly visible and harder to ignore. It was actually easier for them to abandon the CASE tools and the benefits of CASE tools than to change their internal processes and culture. CASE tools require discipline, planning, preparation, and thoughtful approaches, and some places just couldn't or wouldn't do that. Now, for those who did have discipline and good software development practices, CASE tools helped them quite a bit—by creating documentation and automating the niggly little manual tasks they were spending a lot of time on. 08:43 Nikita: You've mentioned in the past that CASE tools are still around today, but we don't call them that. Have they morphed into something else? And if so, what? 
Joe: Ah, so the term Computer Aided Software Engineering morphed into something more acceptable in the ‘90s as vendors overpromised and under-delivered, because many people still saw value and do today see value in creating models to help with understanding, and even automating some aspects of software code development. The term model-based development arose with the idea that you could build small models of what you want to develop and then use that to guide and help with manual code development. And frankly just not using the word CASE was a benefit. “Oh we're not doing CASE tools, but we'll still build pictures and do stuff.” So, it could be automated and generate useful code as well as documentation. And this was both easy to use and easier to manage, and I think the industry and the tools themselves were maturing. 09:35 Joe: So, model-based development took off and the idea of building a model to represent your understanding of the system became popular. And it was funny because people were saying that these were not CASE tools, this was something different, oh for sure, when of course it was pretty much the same thing: rule-based graphical modeling with a repository that created and read code—just named differently. And as I go through this, it reminds me of an interesting anecdote that's given about US President Abraham Lincoln. He once asked someone, “If you call a dog's tail a leg, how many legs does a dog have?” Now, while you're thinking about that, I'll go ahead and give you the correct answer. It's four. You can call a dog's tail anything you want, but it still has four legs. You can call your tools whatever you want, but you still have the idea of building graphical representations of requirements based on rules, and generating code and engineering in both directions. 10:29 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? 
You'll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all free to subscribers. So, what are you waiting for? Pick a topic, leverage the Oracle University Learning Community to ask questions, and then sit for your certification. Visit mylearn.oracle.com to get started. 10:58 Nikita: Welcome back! Joe, how did you come to Oracle and its CASE tools? Joe: I joined Oracle in 1992 teaching the Oracle CASE tool Designer. It was focused on structured analysis and design, and could generate database Data Definition Language (DDL) for creating databases. And it was quite good at it and could reverse engineer databases as well. And it could generate Oracle Forms and Reports – character mode at first, and then GUI. But it was in the early days of the tool and there was definitely room for improvement, or as we would say opportunities for enhancement, and it could be hard to learn and work with. It didn't do round-trip engineering of reading Oracle Forms code and updating the Designer models, though some of that came later. So now you had an issue where you could generate an application as a starting point, but then you had to go in and modify the code, and the code would get updated, but the models wouldn't get updated and so little by little they'd go out of sync with the code, and it just became a big mess. But a lot of people saw that you could develop parts of the application and data definition in models and save time, and that led to what we call model-based development, where we use models for some aspects but not all. We use models where it makes sense and hand code where we need to code. 12:04 Lois: Right, so the two can coexist. Joe, how have model-based development tools been used at Oracle? Are they still in use? Joe: Absolutely! And I'll start with my favorite CASE tool at Oracle, uhm excuse me, model-based development tool. 
Oracle SOA Suite is my idea of what a model-based development tool should be. We create graphical diagrams to represent the flow of messages and message processing in web services—both SOAP and REST interfaces—with logic handled by other diagrammers and models. We have models for logic, human interaction, and rules processing. All this is captured in XML metadata and displayed as nice, colored diagrams that are converted to source code once deployed to the server. The reason I like it so much is Oracle SOA Suite addressed a fundamental problem and weakness in using modeling tools that generated code. It doesn't let the developer touch the generated code. I worked with many different CASE tools over the years, and they all suffered from a fundamental flaw. Analysts and developers would create the models, generate the code, eventually put it into production, and then, if there was a bug in the code, the developer would fix the code rather than change the model. For example, if a bug was found at 10:30 at night, people would get dragged out of bed to come down and fix things. What they should have done is update the model and then generate the new code. But late at night or in a crunch, who's going to do that, right? They would fix the code and say they'd go back and update the model tomorrow. But as we know, tomorrow never comes, and so little by little, the model goes out of synchronization with the actual source code, and eventually people just stopped doing models. 13:33 Joe: And this just happened more and more until the use of CASE tools started diminishing—why would I build a model and have to maintain it to just maintain the code? Why do two separate things? Time is too valuable. So, the problem of creating models and generating code, and then maintaining the code and not the model was a problem in the industry. And I think it certainly hurt the progress of CASE tool adoption.
This is one of the reasons why Oracle SOA Suite is my favorite CASE tool…because you never have access to the actual generated code. You are forced to change the model to get the new code deployed. Period. Problem solved. Well, SOA Suite does allow post-deployment changes, of course, and those can introduce consistency issues; while they're easier to handle, we still have them! So even there, there's an issue. 14:15 Nikita: How and where are modeling tools used in current Oracle software development applications? Joe: While the use of CASE tools and even the name CASE fell out of favor in the early to mid-90s, the idea of using graphical diagrams to capture requirements and generate useful code does live on through to today. If you know what to look for, you can see elements of model-based design throughout all the Oracle tools. Oracle tools successfully use diagrams, rules, and code generation, but only in certain areas where it clearly makes sense and in well-defined boundaries. Let's start with the software development environment that I work with most often, which is Visual Builder Studio. Its design environment uses a modeling tool to model relationships between Business Objects, which is customer data that can have parent-child relationships, and represent and store customer data in tables. It uses a form of entity relationship diagram with cardinality – meaning how many of these are related to how many of those – to model parent-child relationships, including processing requirements like deleting children if a parent is deleted. The Business Object diagrammer displays your business objects and their relationships, and even lets you create new relationships, modify the business objects, and even create new business objects. You can do all your work in the diagram and the correct code is generated. And you can display the diagram for the code that you created by hand. And the two stay in sync.
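The parent-child rule Joe mentions ("deleting children if a parent is deleted") typically comes out of such a diagram as foreign-key DDL. Here is a minimal, self-contained sketch of that generated behavior using SQLite; the table names are hypothetical and this is not the code any Oracle tool emits.

```python
# Demonstrates the cascade-delete rule an ER diagram with cardinality
# would generate: removing a parent row removes its child rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE child (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES parent(id) ON DELETE CASCADE)""")
conn.execute("INSERT INTO parent VALUES (1)")
conn.execute("INSERT INTO child VALUES (10, 1), (11, 1)")

conn.execute("DELETE FROM parent WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM child").fetchone()[0]
print(remaining)  # 0: deleting the parent cascaded to its children
```

The point of modeling the rule rather than hand-coding the delete logic is that the `ON DELETE CASCADE` clause is derived from the diagram, so the diagram and the schema cannot drift apart.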
There's also a diagramming tool to design the page and page flow navigation between the pages in the web application itself. You can work in code or you can work in the diagram (either one or both), and both are updated at the same time. Visual Builder Studio uses a lot of two-way design and engineering. 15:48 Joe: Visual Builder Studio Page Designer allows you to work in code if you want to write HTML, JavaScript, and JSON code, or work in Design mode and drag and drop components onto the page designer canvas, set properties, and both update each other. It's very well done. Very well integrated. Now, oddly enough, even though I am a model-based developer, I find I do most of my work in Visual Builder Studio Designer in the text-based interface because it's so easy to use. I use the diagrammers to document and share my work, and communicate with other team members and customers. While I think it's not being used quite so much anymore, Oracle's JDeveloper and application development framework, ADF, includes built-in tools for doing Unified Modeling Language (UML) modeling. You can create object-oriented class models, generate Java code, reverse engineer Java code, and it updates the model for you. You can also generate the code for mapping Java objects to relational tables. And this has been the heart of data access for ADF Business Components (ADFBC), which is the data layer of Oracle Fusion Apps, for 20 years, although that is being replaced these days. 16:51 Lois: So, these are application development tools for crafting web applications. But do we have any tools like this for the database? Joe: Yes, Lois. We do. Another Oracle tool that uses model-based development functionality is the OCI automated database actions. Here you can define tables, columns, and keys. You can also REST-enable your tables, procedures, and functions. 
Oracle SQL Developer is available both on the web, included with OCI, and on the desktop. The desktop version has a robust and comprehensive data modeler that allows you to do full-blown entity relationship diagramming and generate code that can be implemented through execution in the database. You get some of that modeling in the OCI Database Actions as well, but the desktop version goes further. You can reverse engineer the existing database, generate models from it, modify the models, and then generate the delta, the difference code, to update an existing database structure based on the change in the model. It is very powerful and highly sophisticated, and I strongly recommend looking at it. And Oracle's APEX (Application Express) has SQL Workshop, where you can see a graphic representation of the tables and the relationships between them, and even build SQL statements graphically. 18:05 Nikita: It's time for us to wrap up today, but I think it's safe to say that model-based development tools are still with us. Any final thoughts, Joe? Joe: Well, actually, today I wonder why more people don't model. I've been on multiple projects and worked with multiple clients where there's no graphical modeling whatsoever—not even a diagram of the database design and the relationships between tables and foreign keys. And I just don't understand that. One thing I don't see very much in current CASE or model-based tools is support for impact analysis. I've learned, in many years of working with these tools, to appreciate performing impact analysis. Meaning, if I make a change to this thing here, how many other places are going to be impacted? How many other changes am I going to have to make? Something like Visual Builder Studio Designer is very good at this. 
If you change the spelling of a variable, let's say, in one place, it'll change everywhere that it is referenced and used. And you can do a Find in Files to find every place something is used, but it's still not quite going the full hundred percent and allowing me to do a cross-application impact analysis. If I want to change this one thing here, how many other things will be impacted across applications? But it's a start. And I will say, in talking to the Visual Builder Studio architect, he understands the value of impact analysis. We'll see where the tool goes in the future. And this is not a commitment of future direction, of course. It would appear the next step is using AI to listen to our needs and generate the necessary code from it, potentially bypassing models entirely or creating models as a by-product to aid in communication and understanding. We know a picture's worth a thousand words, and that's as true today as it's ever been, and I don't see that going away anytime soon. 19:41 Lois: Thanks a lot, Joe! It's been so nice to hear about your journey and learn about the history of CASE tools, where they started and where they are now. Joe: Thanks, Lois and Niki. Nikita: Join us next week for our final episode of this series on building the next generation of Oracle Cloud Apps with Visual Builder Studio. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 20:03 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Denoising diffusion models have emerged as a powerful tool for various image generation and editing tasks, facilitating the synthesis of visual content in an unconditional or input-conditional manner. The core idea behind them is learning to reverse the process of gradually adding noise to images, allowing them to generate high-quality samples from a complex distribution. In this survey, we provide an exhaustive overview of existing methods using diffusion models for image editing, covering both theoretical and practical aspects in the field. We delve into a thorough analysis and categorization of these works from multiple perspectives, including learning strategies, user-input conditions, and the array of specific editing tasks that can be accomplished. In addition, we pay special attention to image inpainting and outpainting, and explore both earlier traditional context-driven and current multimodal conditional methods, offering a comprehensive analysis of their methodologies. To further evaluate the performance of text-guided image editing algorithms, we propose a systematic benchmark, EditEval, featuring an innovative metric, LMM Score. Finally, we address current limitations and envision some potential directions for future research. The accompanying repository is released at https://github.com/SiatMMLab/Awesome-Diffusion-Model-Based-Image-Editing-Methods. 2024: Yi Huang, Jiancheng Huang, Yifan Liu, Mingfu Yan, Jiaxi Lv, Jianzhuang Liu, Wei Xiong, He Zhang, Shifeng Chen, Liangliang Cao https://arxiv.org/pdf/2402.17525v1.pdf
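The core idea the abstract describes, reversing a gradual noising process, can be made concrete with the standard closed-form forward step used in DDPM-style models. This is a generic illustrative sketch, not code from any of the surveyed methods; the image shape and noise schedule are arbitrary choices for demonstration.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form for a DDPM-style
    forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    abar = np.cumprod(1.0 - betas)[t]     # cumulative signal retention at step t
    eps = rng.standard_normal(x0.shape)   # the noise a denoiser learns to predict
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)     # a common linear noise schedule
x0 = rng.standard_normal((8, 8))          # stand-in for an image
xt, eps = forward_diffuse(x0, 999, betas, rng)
# by the last step almost no signal remains: x_t is nearly pure noise
```

A trained model predicts `eps` from `xt` and `t`; sampling then runs this process in reverse, which is the "learning to reverse" step the abstract refers to.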
Schneider Electric Announces Evolution of EcoStruxure IT with Model-Based, Automated Sustainability Metric Reporting. The new features offer enhanced visibility of energy and resource consumption, historical data analysis, and detailed metrics to help organisations meet imminent regulatory reporting requirements; include a fast, intuitive, and simple-to-use reporting engine with third-party integration and data export features, all at the touch of a button; and are the result of three years of strategic investment and rigorous testing and development as part of Schneider Electric's CIO-led Green IT Program. Schneider Electric, the leader in digital transformation of energy management and automation, today announced the introduction of new model-based, automated sustainability reporting features within its award-winning EcoStruxure IT data centre infrastructure management (DCIM) software. The release follows three years of strategic investment, and rigorous testing and development as part of Schneider Electric's Green IT Program, led by Schneider Electric's Chief Information Officer Elizabeth Hackenson. Available to all EcoStruxure IT users starting in April, the new and enhanced reporting features combine 20 years of sustainability, regulatory, data centre and software development expertise with advanced machine learning. Customers will have access to a new set of reporting capabilities which traditionally required a deep understanding of manual data calculation methods. Unlike anything available in the market, the new model offers customers a fast, intuitive, and simple-to-use reporting engine to help meet imminent regulatory requirements, including the European Energy Efficiency Directive (EED). 
In fact, the new capabilities go far beyond the EED-required metrics, ensuring customers can measure their data centres' real-time and historical energy performance data against all of the advanced reporting metrics specified within Schneider Electric's White Paper 67. EcoStruxure IT software enables owners and operators to measure and report data centre performance based on historical data and trends analysis, combining it with artificial intelligence (AI) and real-time monitoring to turn it into actionable insights for improved sustainability. With the new download function, organisations can quickly quantify and report at the click of a button, removing laborious manual tasks and making it faster and easier to harness the power of data to reduce the environmental impact of their data centres. Key benefits include: Calculate and track PUE per site/room over time with the CEN/CENELEC 50600-4-2 methodology. Leverage data analytic models and a cloud-based data lake to simplify reporting of PUE. Report current power consumption per site/room and report against historical trends. Utilise "click of a button" reporting for regulations. See trending over time for various data centres and distributed IT environments. Empower customers to securely access and manipulate their data in their preferred tool via third-party integration and data export. "At Schneider Electric, we recognise that sustainability is a journey, and for the last three years, we've increased our investment to develop new software features that make it faster and simpler for our customers to operate resilient, secure and sustainable IT infrastructure," said Kevin Brown, Senior Vice President, EcoStruxure IT, Schneider Electric. "The new reporting capabilities included with EcoStruxure IT have been tested and adopted by our own organisation, and will allow customers to turn complex data into meaningful information, and report on key sustainability metrics." 
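For context, the PUE metric tracked above is a simple ratio: total facility energy divided by IT equipment energy, with 1.0 as the theoretical ideal. The sketch below is a generic illustration of per-site PUE trending, not Schneider Electric's EcoStruxure implementation; the site name and readings are made up.

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy over IT
    equipment energy. 1.0 is the theoretical ideal."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# hypothetical per-site (total, IT) energy readings over two periods
readings = {"site-a": [(1500.0, 1000.0), (1400.0, 1000.0)]}

# trend PUE per site over time, in the spirit of the reporting above
trend = {site: [pue(total, it) for total, it in vals]
         for site, vals in readings.items()}
# trend["site-a"] == [1.5, 1.4]: the site improved between the two periods
```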
A new era for Green IT In 2021, Schneider Electric released its Schneider Sustainability Impact (SSIs), publicising the company's sustainability commitments. Aligning with the SSI purpose, Schneider Electric's CIO Elizabeth Hackenson kickstarted the company's Green ...
Are you interested in service design as creative problem solving? Summary of the article titled "Problem-solving design-platform model based on the methodological distinctiveness of service design" from 2019 by Youngok Jeon, published in the Journal of Open Innovation: Technology, Market, and Complexity. This is great preparation for our next interview with Talia Radywyl in episode 198, talking about service design processes. Since we are investigating the future of cities, I thought it would be interesting to see how wicked urban problems can be solved with service design. This article defines the meaning and core properties of service design, and proposes a six-step service design process model based on the interrelationships among these properties. I would like to highlight three aspects as the most important: Service design focuses on developing human-centered solutions to complex problems across sectors like urban development and healthcare, emphasizing the enhancement of user experience. Service design integrates goods and services to meet customer needs sustainably, shifting from goods-dominant to service-dominant logic that values experiential over tangible offerings. The approach employs design thinking and participatory design to involve stakeholders in creating solutions, aiming to bridge the delivery gap between expected and actual service quality. You can find the article through this link. Abstract: This study explores the differentiated properties of service design in the context of the final value pursued by this methodology, avoiding the interpretation of pending issues to which service design is applied. 
First, the following were identified as the core properties of service design, differentiated from other design methodologies: “Design Thinking”, a creative problem-solving process; “User Experience Value”, the pursued goal; “Participatory Design”, a practical research methodology; and “Interaction between Users and Providers”, the core research scope of pending issues. Second, the study proposed a six-step service design process model based on the interrelationships between these properties. The “problem recognition” step identifies a decline in the quality of user experiences and forms a self-awareness of dissatisfaction. Next, the “problem understanding” step conducts multidisciplinary cooperative research on the dissatisfaction. Subsequently, the “problem deduction” step determines users' unsatisfied desires through visualization of the core pending issues, and the “problem definition” step performs creative conception activities with problem-solving approaches for the unsatisfied desires. Further, the “problem-solving” step develops service design models, and finally, the “problem-solving strategy check” step confirms the utility of the models in a real-world application. Connecting episodes you might be interested in: No.021 - Interview with Bridgette Engler about the need for participatory design in foresight; No.058R - An adaptive learning process for developing and applying sustainability indicators with local communities; No.098R - Building social capital. You can find the transcript through this link. What was the most interesting part for you? What questions arose for you? Let me know on Twitter @WTF4Cities or on the wtf4cities.com website, where the shownotes are also available. I hope this was an interesting episode for you and thanks for tuning in. Music by Lesfm from Pixabay
DTL S8A20 Applying a Model-Based Approach to Predictive Maintenance. In this episode of De Dataloog, the Dutch-language podcast about data and AI, we explore the model-based approach to predictive maintenance in rail infrastructure. In this engaging session we have the pleasure of welcoming Dr. Ir. Annemieke Meghoe, an expert in the field of model-based approaches to maintenance prediction. Dr. Meghoe's research highlights the role of predictive maintenance in realizing a sustainable transport system. With a model-based approach, she combines reduced-order models, based on first principles, with extensive datasets to predict the failure probability or remaining life of rail components. This approach integrates different failure mechanisms and gathers field data from trains, the track, and the environment to offer a comprehensive solution. In our discussion we dig deeper into how these models and data sources are developed and aligned to tackle the challenges in the rail sector and beyond. We explore the impact of her work on ProRail and the broader adoption of model-based maintenance strategies. We also cover the criteria for data selection, the combination of failure mechanisms, the time-consuming process of bringing these elements together, and the role of surrogate and meta-models in her research. A fine exploration of the technical and practical aspects of predictive maintenance, bridging theory and practice, and examining how Dr. Meghoe's pioneering approach contributes to optimizing maintenance strategies for one of the most sustainable transport networks in the world. De Dataloog is the independent Dutch-language podcast about data and artificial intelligence. Here you'll hear everything you need to know about the sense and nonsense of data, the latest developments, and real stories from practice. 
Our hosts always keep it understandable, but they don't shy away from depth. Enjoying De Dataloog? Subscribe to the podcast and leave a review.
Jason A. Churchill and Joe Doyle talk about Joe's recent mock draft, how the Dodgers land Tyler Glasnow, the state of the Yankees' farm system and how they add starting pitching, and what kind of arm the Orioles could land by using their top prospects.
Max Kolesnikov is a founder and CEO of MKS Technology, an embedded software and controls engineering firm. He has nearly 20 years of experience working in controls and software for real-time, safety-critical applications in automotive and industrial domains. Max offers embedded software and controls engineering consulting for automotive applications. Website: http://mks.technology Email: max.kolesnikov@mks.technology LinkedIn: https://www.linkedin.com/in/max-kolesnikov-phd-9b41617/ You can find Jeff at https://jeffgable.com. You can find Luca at https://luca.engineer.
IN THIS EPISODE... In today's digital age, safeguarding sensitive data is a non-negotiable requirement for every business owner. But navigating the labyrinth of cybersecurity can be daunting, and that's where our guest, Tracy Gregorio, comes in. As any business owner will tell you, protecting customer and employee data is paramount, and this is just the tip of the iceberg. For many industries, such as healthcare and finance, stringent requirements and regulatory demands make the cybersecurity landscape even more complex. Tracy Gregorio is the Chief Executive Officer of G2 Ops, Inc., an engineering firm specializing in model-based and cybersecurity systems engineering and strategic consulting. With a background in information technology, Tracy's strategic direction has enabled G2 Ops to provide tailored, cost-effective solutions in Model-Based and Cybersecurity Systems Engineering. These solutions are designed to address the ever-evolving threats posed by information warfare, catering to the unique needs of both government and commercial clients. ------------ Full show notes, links to resources mentioned, and other compelling episodes can be found at http://LeadYourGamePodcast.com. (Click the magnifying icon at the top right and type “Tracy”) Love the show? Subscribe, rate, review, and share! JUST FOR YOU: Increase your leadership acumen by identifying your personal Leadership Trigger. Take my free quiz and instantly receive your 5-page report. Need to up-level your workforce or execute strategic People initiatives? https://shockinglydifferent.com/contact or tweet @KaranRhodes. ------------- ABOUT TRACY GREGORIO: Tracy Gregorio is CEO of G2 Ops, an IT engineering and cybersecurity company serving the U.S. Navy, government, and commercial enterprises. 
Her wide-ranging experiences include being a software engineer for the Navy, an analyst for a cable broadcast network, running enrollment management for an online university, and running a certified woman-owned firm recognized five years straight by Inc 5000 as one of our country's fastest-growing small businesses. She chairs the Cybersecurity Committee of the Virginia Ship Repair Association and served on the Executive Committee of the Virginia Commonwealth Cyber Initiative.WHAT TO LISTEN FOR:1. What is cybersecurity consulting for businesses?2. What is the role of gender in a male-dominated industry?3. What are the growth challenges in the cybersecurity landscape?4. Why is strategic decision-making significant?5. What are the key factors in defining success within a leadership role?FEATURED TIMESTAMPS:[04:35] Shattering Glass Ceilings in Cybersecurity: A Journey of Innovation[14:14] Leadership dynamics, growth challenges, and role of gender in a male-dominated industry[20:23] Drawing lessons from prior missteps[23:12] Signature Segment: Tracy's LATTOYG Tactics of Choice[26:37] Tracy's entry into the LATTOYG Playbook[28:06] How does Tracy prioritize self-care and manage personal well-being?[29:07] Tips and Encouragement for Aspiring Leaders and STEM Professionals[33:41] Signature Segment: Karan's TakeLINKS FOR TRACY:Website:
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List. 
2023: Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Qin Liu, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui https://arxiv.org/pdf/2309.07864v3.pdf
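The brain, perception, and action components in the survey's general framework can be sketched as a minimal loop. This is an illustrative toy, not an API from the paper; the `observe`, `llm`, and `act` callables are placeholders (here the "LLM" is a rule-based stand-in).

```python
def agent_step(observe, llm, act, memory):
    """One cycle of a minimal LLM-based agent:
    perception -> brain (LLM conditioned on memory) -> action."""
    percept = observe()                       # perception module
    prompt = "\n".join(memory + [percept])    # brain: condition on history
    decision = llm(prompt)                    # brain: model chooses an action
    memory.append(f"saw: {percept} / did: {decision}")
    act(decision)                             # action module
    return decision

# toy wiring with a rule-based stand-in for the LLM
log = []
memory = []
decision = agent_step(
    observe=lambda: "door is closed",
    llm=lambda prompt: "open door" if "closed" in prompt else "wait",
    act=log.append,
    memory=memory,
)
# decision == "open door"; memory now records the percept/action pair
```

Swapping the `llm` placeholder for a real model call and the `act` callback for tool invocations gives the single-agent scenario the abstract describes; multi-agent setups run several such loops against a shared environment.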
Autonomous agents have long been a prominent research topic in the academic community. Previous research in this field often focuses on training agents with limited knowledge within isolated environments, which diverges significantly from human learning processes, and thus makes it hard for the agents to achieve human-like decisions. Recently, through the acquisition of vast amounts of web knowledge, large language models (LLMs) have demonstrated remarkable potential in achieving human-level intelligence. This has sparked an upsurge in studies investigating autonomous agents based on LLMs. To harness the full potential of LLMs, researchers have devised diverse agent architectures tailored to different applications. In this paper, we present a comprehensive survey of these studies, delivering a systematic review of the field of autonomous agents from a holistic perspective. More specifically, our focus lies in the construction of LLM-based agents, for which we propose a unified framework that encompasses a majority of the previous work. Additionally, we provide a summary of the various applications of LLM-based AI agents in the domains of social science, natural science, and engineering. Lastly, we discuss the commonly employed evaluation strategies for LLM-based AI agents. Based on the previous studies, we also present several challenges and future directions in this field. To keep track of this field and continuously update our survey, we maintain a repository for the related references at https://github.com/Paitesanshi/LLM-Agent-Survey. 2023: Lei Wang, Chengbang Ma, Xueyang Feng, Zeyu Zhang, Hao-ran Yang, Jingsen Zhang, Zhi-Yang Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-rong Wen https://arxiv.org/pdf/2308.11432v1.pdf
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Model-based Approach to AI Existential Risk, published by Samuel Dylan Martin on August 25, 2023 on The AI Alignment Forum. Introduction Polarisation hampers cooperation and progress towards understanding whether future AI poses an existential risk to humanity and how to reduce the risks of catastrophic outcomes. It is exceptionally challenging to pin down what these risks are and what decisions are best. We believe that a model-based approach offers many advantages for improving our understanding of risks from AI, estimating the value of mitigation policies, and fostering communication between people on different sides of AI risk arguments. We also believe that a large percentage of practitioners in the AI safety and alignment communities have appropriate skill sets to successfully use model-based approaches. In this article, we will lead you through an example application of a model-based approach for the risk of an existential catastrophe from unaligned AI: a probabilistic model based on Carlsmith's Is Power-seeking AI an Existential Risk? You will interact with our model, explore your own assumptions, and (we hope) develop your own ideas for how this type of approach might be relevant in your own work. You can find a link to the model here. In many poorly understood areas, people gravitate to advocacy positions. We see this with AI risk, where it is common to see writers dismissively call someone an "AI doomer", or "AI accelerationist". People on each side of this debate are unable to communicate their ideas to the other side, since advocacy often includes biases and evidence interpreted within a framework not shared by the other side. In other domains, we have witnessed first-hand that model-based approaches are a constructive way to cut through advocacy like this. 
For example, by leveraging a model-based approach, the Rigs-to-Reefs project reached near consensus among 22 diverse organisations on the contentious problem of how to decommission the huge oil platforms off the Santa Barbara coast. For decades, environmental groups, oil companies, marine biologists, commercial and recreational fishermen, shipping interests, legal defence funds, the State of California, and federal agencies were stuck in an impasse on this issue. The introduction of a model refocused the dialog on specific assumptions, objectives and options, and led to 20 out of the 22 organisations agreeing on the same plan. The California legislature encoded this plan into law with bill AB 2503, which passed almost unanimously. There is a lot of uncertainty around existential risks from AI, and the stakes are extremely high. In situations like this, we advocate quantifying uncertainty explicitly using probability distributions. Sadly, this is not as common as it should be, even in domains where such techniques would be most useful. A recent paper on the risks of unaligned AI by Joe Carlsmith (2022) is a powerful illustration of how probabilistic methods can help assess whether advanced AI poses an existential risk to humanity. In this article, we review Carlsmith's argument and incorporate his problem decomposition into our own Analytica model. We then expand on this starting point in several ways to demonstrate elementary ways to approach each of the distinctive challenges in the x-risk domain. We take you on a tour of the live model to learn about its elements and enable you to dive deeper on your own. Challenges Predicting the long-term future is always challenging. The difficulty is amplified when there is no historical precedent. But this challenge is not unique; we lack historical precedent in many other areas, for example when considering a novel government program or a fundamentally new business initiative. 
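To make the quantified-uncertainty idea concrete, here is a toy Monte Carlo version of a Carlsmith-style decomposition: total risk as a product of conditional premise probabilities, each modelled as a distribution rather than a point estimate. The premise labels and Beta parameters below are illustrative placeholders, not the authors' Analytica model or Carlsmith's numbers.

```python
import numpy as np

# Conditional premises in a Carlsmith-style decomposition, each given
# a Beta distribution instead of a point estimate.  The labels and
# (a, b) parameters are illustrative placeholders only.
premises = {
    "advanced agentic AI is built": (8, 2),
    "alignment is hard": (5, 5),
    "misaligned systems get deployed": (4, 6),
    "deployment leads to high-impact failure": (3, 7),
    "failure disempowers humanity": (3, 7),
    "disempowerment is an existential catastrophe": (6, 4),
}

rng = np.random.default_rng(42)
n = 100_000
# sample each premise independently and multiply: a distribution over total risk
samples = np.prod([rng.beta(a, b, n) for a, b in premises.values()], axis=0)

lo, med, hi = np.percentile(samples, [5, 50, 95])
# report the spread, not just a single number
print(f"median {med:.4f}, 90% interval [{lo:.4f}, {hi:.4f}]")
```

The payoff of a model like this is exactly what the Rigs-to-Reefs example illustrates: disagreements become arguments about specific Beta parameters and conditional independence assumptions rather than about labels like "doomer" or "accelerationist".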
We also lack precedent when world conditions change due to changes in technology, ...
Model-Based Definition (MBD) can fundamentally transform how we visualize and understand design intent by adding new dimensions to 2D drawings and pushing the boundaries of 3D CAD models. Learn how to get your organization to realize the benefits of MBD by bringing GD&T (Geometric Dimensioning and Tolerancing), tolerance analysis, annotations, and even PMI (Product Manufacturing Information) into the 3D CAD space.
Business | Why Consistency & Daily Diligence Is Key to Building SUPER SUCCESS + How to Create Duplicable Business Model Based Upon the Foundations of a Duplicable Process, a Winning Team & Well-Defined Guardrails Clay Clark Testimonials | "Clay Clark Has Helped Us to Grow from 2 Locations to Now 6 Locations. Clay Has Done a Great Job Helping Us to Navigate Anything That Has to Do with Running the Business, Building the System, the Workflows, to Buy Property." - Charles Colaw (Learn More Charles Colaw and Colaw Fitness Today HERE: www.ColawFitness.com) See the Thousands of Success Stories and Millionaires That Clay Clark Has Coached to Success HERE: https://www.thrivetimeshow.com/testimonials/ Learn More About How Clay Has Taught Doctor Joe Lai And His Team Orthodontic Team How to Achieve Massive Success Today At: www.KLOrtho.com Learn How to Grow Your Business Full THROTTLE NOW!!! Learn How to Turn Your Ideas Into A REAL Successful Company + Learn How Clay Clark Coached Bob Healy Into the Success Of His www.GrillBlazer.com Products Learn More About the Grill Blazer Product Today At: www.GrillBlazer.com Learn More About the Actual Client Success Stories Referenced In Today's Video Including: www.ShawHomes.com www.SteveCurrington.com www.TheGarageBA.com www.TipTopK9.com Learn More About How Clay Clark Has Helped Roy Coggeshall to TRIPLE the Size of His Businesses for Less Money That It Costs to Even Hire One Full-Time Minimum Wage Employee Today At: www.ThrivetimeShow.com To Learn More About Roy Coggeshall And His Real Businesses Today Visit: https://TheGarageBA.com/ https://RCAutospecialists.com/ 
Learn More About Attending the Highest Rated and Most Reviewed Business Workshops On the Planet Hosted by Clay Clark In Tulsa, Oklahoma HERE: https://www.thrivetimeshow.com/business-conferences/ Download A Millionaire's Guide to Become Sustainably Rich: A Step-by-Step Guide to Become a Successful Money-Generating and Time-Freedom Creating Business HERE: www.ThrivetimeShow.com/Millionaire
Daniel Campbell is VP of Model-Based Definition at Capvidia. He has more than 20 years of experience in the field of digital metrology, software design, and model-based definition. He is also currently the Chair of the ANSI Working Group, and a member of the Board of Directors of the Dimensional Metrology Standards Consortium. In this episode, learn about Model-Based Definition (MBD) and how large companies are using it to streamline workflows and increase efficiency in manufacturing and metrology. Aaron Moncur, host. About Being An Engineer: The Being An Engineer podcast is a repository for industry knowledge and a tool through which engineers learn about and connect with relevant companies, technologies, people resources, and opportunities. We feature successful mechanical engineers and interview engineers who are passionate about their work and who made a great impact on the engineering community. The Being An Engineer podcast is brought to you by Pipeline Design & Engineering. Pipeline partners with medical & other device engineering teams who need turnkey equipment such as cycle test machines, custom test fixtures, automation equipment, assembly jigs, inspection stations and more. You can find us on the web at www.teampipeline.us
In this week's SlatorPod, Fireflies.ai CEO Krish Ramineni joins us to talk about scaling the AI meeting assistant and building on the latest advances in large language models. Krish starts with his journey to co-founding Fireflies, which began as a drone delivery service and, as a result of conversations with customers and investors, evolved into an AI meeting assistant to solve their own pain point. The CEO shares how they found their product-market fit after focusing on automated transcripts over human-assisted note-taking. He discusses the early days of AI investment and how, with the rise of APIs and large language models (LLMs), you no longer need multiple PhDs to attract investors. Krish explains how Fireflies leverages technologies like Whisper to improve their language transcription, allowing them to be more accessible to global companies. He talks about their decision to improve their Super Summaries feature through GPT technology. The CEO shares his excitement about the potential for LLMs and how Fireflies is building a Chrome extension that uses LLMs to summarize any article or video on the internet. He advises that simply building a wrapper on top of OpenAI is not a defensible moat for companies; rather, you should build a unique platform with a unique angle into the industry you're selling to. Krish talks about the current fundraising environment, where there is a lot of money being thrown around for generative AI companies, but only a few will weather the storm. When it comes to hiring machine learning talent, Krish doesn't believe in prompt engineering and also holds the view that machine learning companies may no longer need to hire large cohorts of ML PhDs to scale. The pod rounds off with the company's roadmap for 2023, which includes creating an ecosystem of extensions on top of Fireflies. These extensions will offer powerful functionalities to users in different sectors like healthcare and recruiting.
Learn how to grow and preserve your wealth through unbiased decision-making from Michael Episcope in today's episode as we look into current and future market trends, how they could affect the real estate market, and why now is a good time to invest in real estate.
WHAT YOU'LL LEARN FROM THIS EPISODE
- What business models can protect your business in a downturn
- Origin Multilytics: What it is and what it does
- Things to consider during asset acquisition in today's market
- Why it's important to communicate with your investors
- The unexpected effects of working from home
RESOURCE/LINK MENTIONED
The Psychology of Money by Morgan Housel | Paperback: https://amzn.to/3Tk09xI and Kindle: https://amzn.to/3SnUZQ5
ABOUT MICHAEL EPISCOPE
Michael is a co-founder and co-CEO of Origin Investments. He co-chairs the investment committee and oversees investor relations and capital raising. Under Michael's leadership, Origin has acquired $1 billion in equity under management and executed more than $2.6 billion in real estate transactions in fast-growing markets throughout the United States.
CONNECT WITH MICHAEL
Website: Origin Investments
Email: michael@origininvestments.com | investorrelations@origininvestments.com
CONNECT WITH US
Want a list of top-rated real estate conferences, virtual meetups, and mastermind groups? Send Tate an email at tate@glequitygroup.com to learn more about real estate using a relational approach. Looking for ways to make passive income? Greenlight Equity Group can help you invest in multifamily properties and create consistent cash flow without being a landlord. Book a consultation call and download Tate's free ebook, "F.I.R.E. - Financial Independence Retire Early via Apartment Investing," at www.investwithgreenlight.com to start your wealth-building journey today!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Plan for mediocre alignment of brain-like [model-based RL] AGI, published by Steve Byrnes on March 13, 2023 on The AI Alignment Forum. (This post is a more simple, self-contained, and pedagogical version of Post #14 of Intro to Brain-Like AGI Safety.) (Vaguely related to this Alex Turner post and this John Wentworth post.) I would like to have a technical plan for which there is a strong robust reason to believe that we'll get an aligned AGI and a good future. This post is not such a plan. However, I also don't have a strong reason to believe that this plan wouldn't work. Really, I want to throw up my hands and say “I don't know whether this would lead to a good future or not”. By “good future” here I don't mean optimally-good—whatever that means—but just “much better than the world today, and certainly much better than a universe full of paperclips”. I currently have no plan, not even a vague plan, with any prayer of getting to an optimally-good future. That would be a much narrower target to hit. Even so, that makes me more optimistic than at least some people. Or at least, more optimistic about this specific part of the story. In general I think many things can go wrong as we transition to the post-AGI world—see discussion by Dai & Soares—and overall I feel very doom-y, particularly for reasons here. This plan is specific to the possible future scenario (a.k.a. “threat model” if you're a doomer like me) that future AI researchers will develop “brain-like AGI”, i.e. learning algorithms that are similar to the brain's within-lifetime learning algorithms. (I am not talking about evolution-as-a-learning-algorithm.) These algorithms, I claim, are in the general category of model-based reinforcement learning. 
Model-based RL is a big and heterogeneous category, but I suspect that for any kind of model-based RL AGI, this plan would be at least somewhat applicable. For very different technological paths to AGI, this post is probably pretty irrelevant. But anyway, if someone published an algorithm for x-risk-capable brain-like AGI tomorrow, and we urgently needed to do something, this blog post is more-or-less what I would propose to try. It's the least-bad plan that I currently know. So I figure it's worth writing up this plan in a more approachable and self-contained format. 1. Intuition: Making a human into a moon-lover (“selenophile”) Try to think of who is the coolest / highest-status-to-you / biggest-halo-effect person in your world. (Real or fictional.) Now imagine that this person says: “You know what's friggin awesome? The moon. I just love it. The moon is the best.” You stand there with your mouth agape, muttering to yourself in hushed tones: “Wow, huh, the moon, yeah, I never thought about it that way.” (But 100× moreso. Maybe you're on some psychedelic at the time, or this is happening during your impressionable teenage years, or whatever.) You basically transform into a “moon fanboy” / “moon fangirl” / “moon nerd” / “selenophile”. How would that change your motivations and behaviors going forward? You're probably going to be much more enthusiastic about anything associated with the moon. You're probably going to spend a lot more time gazing at the moon when it's in the sky. If there are moon-themed trading cards, maybe you would collect them. If NASA is taking volunteers to train as astronauts for a trip to the moon, maybe you'd enthusiastically sign up. If a supervillain is planning to blow up the moon, you'll probably be extremely opposed to that, and motivated to stop them. Hopefully this is all intuitive so far. What's happening mechanistically in your brain? 
As background, I think we should say that one part of your brain (the cortex, more-or-less) has “thoughts”, and another part of your brain (the basal ganglia, more-or-less) assigns a “value” (in RL terminology)...
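The cortex-proposes / basal-ganglia-scores split described above can be sketched as a toy piece of model-based RL: a generator of candidate "thoughts" plus a learned value function that steers which one wins. Everything below (the thought names, rewards, and learning rate) is invented purely for illustration; this is not Byrnes's actual proposal, only the bare mechanism.

```python
# Illustrative sketch: one module proposes "thoughts" (plans), another
# assigns each a learned value, and the highest-valued thought is selected.

def propose_thoughts():
    # Candidate "thoughts" generated by the model-based part (the cortex).
    return ["gaze_at_moon", "collect_moon_cards", "ignore_moon"]

class ValueFunction:
    """Stands in for the basal ganglia: scores thoughts, learns from reward."""
    def __init__(self):
        self.values = {}                     # learned value per thought

    def score(self, thought):
        return self.values.get(thought, 0.0)

    def update(self, thought, reward, lr=0.5):
        # Simple TD(0)-style update toward the observed reward.
        v = self.score(thought)
        self.values[thought] = v + lr * (reward - v)

critic = ValueFunction()
# Suppose moon-related thoughts get reinforced (the "selenophile" shift).
for _ in range(10):
    critic.update("gaze_at_moon", reward=1.0)
    critic.update("ignore_moon", reward=0.0)

best = max(propose_thoughts(), key=critic.score)
# After reinforcement, moon-related thoughts win the selection.
```

The alignment-relevant point is that behavior follows whatever the value function has been trained to reward, not the world model itself.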
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Model-Based Policy Analysis under Deep Uncertainty, published by Max Reddel on March 6, 2023 on The Effective Altruism Forum. This post is based on a talk that I gave at EAGxBerlin 2022. It is intended for policy researchers who want to extend their toolkit with computational tools. I show how we can support decision-making with simulation models of socio-technical systems while embracing uncertainties in a systematic manner. The technical field of decision-making under deep uncertainty offers a wide range of methods to account for various parametric and structural uncertainties while identifying robust policies in a situation where we want to optimize for multiple objectives simultaneously.
Summary
- Real-world political decision-making problems are complex, with disputed knowledge, differing problem perceptions, opposing stakeholders, and interactions between framing the problem and problem-solving.
- Modeling can help policy-makers to navigate these complexities. Traditional modeling is ill-suited for this purpose. Systems modeling is a better fit (e.g., agent-based models).
- Deep uncertainty is everywhere, and it makes expected-utility reasoning virtually useless.
- Decision-Making under Deep Uncertainty is a framework that can build upon systems modeling and overcome deep uncertainties.
- Explorative modeling > predictive modeling.
- Value diversity (aka multiple objectives) > single objectives.
- Focus on finding vulnerable scenarios and robust policy solutions.
- Good fit with the mitigation of GCRs, X-risks, and S-risks.
Complexity
Complexity science is an interdisciplinary field that seeks to understand complex systems and the emergent behaviors that arise from the interactions of their components. Complexity is often an obstacle to decision-making. So, we need to address it.
Ant Colonies Ant colonies are a great example of how complex systems can emerge from simple individual behaviors. Ants follow very simplistic rules, such as depositing food, following pheromone trails, and communicating with each other through chemical signals. However, the collective behavior of the colony is highly sophisticated, with complex networks of pheromone trails guiding the movement of the entire colony toward food sources and the construction of intricate structures such as nests and tunnels. The behavior of the colony is also highly adaptive, with the ability to respond to changes in the environment, such as changes in the availability of food or the presence of predators. Examples of Economy and Technology Similarly, the world is also a highly complex system, with a vast array of interrelated factors and processes that interact with each other in intricate ways. These factors include the economy, technology, politics, culture, and the environment, among others. Each of these factors is highly complex in its own right, with multiple variables and feedback loops that contribute to the overall complexity of the system. For example, the economy is a highly complex system that involves the interactions between individuals, businesses, governments, and other entities. The behavior of each individual actor is highly variable and can be influenced by a range of factors, such as personal motivations, cultural norms, and environmental factors. These individual behaviors can then interact with each other in complex ways, leading to emergent phenomena such as market trends, economic growth, and financial crises. Similarly, technology is a highly complex system that involves interactions between multiple components, such as hardware, software, data, and networks. Each of these components is highly complex in its own right, with multiple feedback loops and interactions that contribute to the overall complexity of the system. 
The behavior of the system as a whole can then be highly unpredict...
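The workflow the talk describes, explore many scenarios rather than predict one and look for robust policies, can be illustrated with a minimal minimax-regret sketch: evaluate each candidate policy across an ensemble of plausible scenarios and keep the one whose worst-case regret is smallest. The policies, scenarios, and payoff numbers below are invented purely for illustration.

```python
# Robust policy selection under deep uncertainty: rather than optimizing
# expected utility over a single "best-guess" future, compare policies by
# their regret across an ensemble of scenarios.

policies = ["subsidize", "regulate", "do_nothing"]
scenarios = ["boom", "stagnation", "crisis"]

# payoff[policy][scenario]: outcome under each (policy, scenario) pair.
payoff = {
    "subsidize":  {"boom": 8, "stagnation": 5, "crisis": 2},
    "regulate":   {"boom": 6, "stagnation": 6, "crisis": 4},
    "do_nothing": {"boom": 9, "stagnation": 3, "crisis": 0},
}

def max_regret(policy):
    # Regret in a scenario = best achievable payoff there minus this
    # policy's payoff; robustness = smallest maximum regret.
    regrets = []
    for s in scenarios:
        best_here = max(payoff[p][s] for p in policies)
        regrets.append(best_here - payoff[policy][s])
    return max(regrets)

robust = min(policies, key=max_regret)
```

Note that "do_nothing" wins under the best-guess "boom" scenario yet is the least robust choice overall, which is exactly the gap between predictive and explorative modeling.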
In this week's episode Greg and Patrick revisit a topic they addressed in their 2nd-ever episode: statistical power. Here they continue their discussion by attempting to clarify the power of what, and they explore ways of obtaining meaningful power estimates using the structural equation modeling framework. Along the way they also discuss tearing arms off, German dentists, booby prizes, Dr. Strangelove, making it look like an accident, shrug emojis, the whale petting machine, baseball and war, where's Waldo, whale holes, the big R-squared, throwing reviewers against the wall, DIY power, in fairness to me, eggplants, and screw you guys, I'm going home. Stay in contact with Quantitude! Twitter: @quantitudepod Web page: quantitudepod.org Merch: redbubble.com
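The "DIY power" idea mentioned above is commonly implemented as a Monte Carlo simulation: generate many datasets under an assumed effect, run the analysis on each, and report the fraction of significant results. A real SEM power study would simulate from and refit a structural model with dedicated software; this stdlib-only sketch substitutes a simple two-group comparison to show the skeleton of the approach.

```python
# Monte Carlo power estimate: the fraction of simulated studies (under an
# assumed true effect) in which the test comes out significant.
import random
import statistics
from math import sqrt

random.seed(1)
N, REPS, EFFECT = 30, 2000, 0.5   # per-group n, simulations, true effect d

def one_sim():
    a = [random.gauss(0.0, 1.0) for _ in range(N)]
    b = [random.gauss(EFFECT, 1.0) for _ in range(N)]
    # Two-sample z-test (normal approximation, for brevity).
    se = sqrt(statistics.variance(a) / N + statistics.variance(b) / N)
    z = (statistics.mean(b) - statistics.mean(a)) / se
    return abs(z) > 1.96           # significant at alpha = .05

power = sum(one_sim() for _ in range(REPS)) / REPS
# 'power' estimates the probability of detecting EFFECT with this design;
# rerun with different N to see how sample size drives power.
```

The same loop generalizes to SEM: swap the two-group simulation for data generated from a hypothesized structural model and the z-test for a model fit.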
In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Dr. Sam Procter and Lutz Wrage, researchers with the SEI, discuss the Guided Architecture Trade Space Explorer (GATSE), a new SEI-developed model-based tool to help with the design of safety-critical systems. The GATSE tool allows engineers to evaluate more design options in less time than they can now. This prototype language extension and software tool partially automates the process of model-based systems engineering so that systems engineers can rapidly explore combinations of different design options.
TestTalks | Automation Awesomeness | Helping YOU Succeed with Test Automation
Are model-based testing and record and configure-based testing mutually exclusive, or can they be used together to provide a comprehensive testing approach? In today's episode, Matthias Rapp, a test automation and Tricentis veteran, and Shawn Jaques, the Director of Product Marketing at Tricentis, discuss model-based testing and record and configure-based testing. We explore the differences between these two testing methods and when to use one over the other. We also discuss how they can work together and how AI and data-driven testing fit into these paradigms. Tune in to learn more about these testing techniques and how they can help ensure the quality and reliability of your systems. Check out Model-based testing in the cloud yourself: https://www.tricentis.com/products/tricentis-test-automation
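As a rough illustration of what separates the two approaches: record-based testing replays captured interactions, while model-based testing derives test cases from a model of the system. The toy login model below (invented for illustration, not Tricentis's actual implementation) generates one test sequence per reachable transition of a finite-state model.

```python
# Model-based testing sketch: describe the system under test as a
# finite-state model, then generate test steps by covering every
# transition instead of hand-recording scripts.

# Model: state -> {action: next_state}
model = {
    "logged_out": {"login_ok": "logged_in", "login_bad": "error"},
    "error":      {"retry": "logged_out"},
    "logged_in":  {"logout": "logged_out"},
}

def all_transition_tests(model, start):
    # Breadth-first walk emitting one test case per reachable transition.
    tests, seen, frontier = [], set(), [(start, [])]
    while frontier:
        state, path = frontier.pop(0)
        for action, nxt in model.get(state, {}).items():
            if (state, action) not in seen:
                seen.add((state, action))
                tests.append(path + [action])
                frontier.append((nxt, path + [action]))
    return tests

tests = all_transition_tests(model, "logged_out")
# Each generated test is a sequence of actions to replay against the app.
```

When the application changes, you edit the model and regenerate the suite, which is the maintenance advantage usually claimed for the model-based approach.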
There's a lot of talk about Model-Based Definition (MBD), which means there's also a lot of different opinions out there. It's one thing to paint a picture of some future vision for a Model-Based Enterprise—but how can you start taking steps today to make MBD practical for your team? That's exactly what the Action Engineering team does. Jennifer Herron, Founder and CEO, and Rhiannon Gallagher, Chief Business Psychologist, join Adam Keating to talk all about MBD and what they've seen working with organizations to achieve MBD and MBE goals. In this episode, we discuss:
- Why bother with MBD at all
- The challenges of shifting toward MBD
- How psychological safety impacts manufacturing organizations
- MBD and supply chain relationships
More information about Jennifer Herron and Rhiannon Gallagher and today's topics: Jennifer Herron: https://www.linkedin.com/in/jennifer-herron-cad/ Rhiannon Gallagher: https://www.linkedin.com/in/rhiannongallagher/ Action Engineering: https://www.action-engineering.com/ Peer Check Homepage: https://www.colabsoftware.com/podcast/peer-check To hear this interview and more like it, subscribe to Peer Check! Find us on Apple Podcasts, Spotify, or our website—or just search for Peer Check in your favourite podcast player.
This week's construction tech news with Jeff Sample (@IronmanofIT), and Lonnie Cumpton (@LonnieCumpton) Featuring: - Interview with John Theis from Bidlight - Construction Tech News of the Week Follow @TheConTechCrew on social media for more updates and to join the conversation! Listen to the show at http://thecontechcrew.com Powered by JBKnowledge Learn more at http://thecontechcrew.com or follow @JBKnowledge & @TheConTechCrew on Twitter.
To subscribe: Critical Point Podcast, $27.99 per month recurring. Billing begins two weeks from signup (a form of 2-week free trial). Cancel anytime. Primary focus on the US stock market and the major grains. Short-term to super-long-term forecasts from the business cycle model, including signals. Additional model-based opinions/signals include global stock market indexes, interest rates, dollar, bitcoin, gold, oil, the boom/bust cycle of the economy, and the cyclical climate events that can cause crop problems. For information, education, and explanation, see criticalpointpod.com. Email: rich@ag-financial.com Twitter: @rich_posson
Learn how team leaders should choose the best real estate team compensation model based on a variety of different real estate team organizational structures.
As your life changes, the way you run your business will also have to change. It's one thing to run your gym when you're single. It's another to do it when you're in a serious relationship. And it's a completely different game when you're starting a family. Each stage of life will require different things from you. They will all require that you give less time to the gym and more time to your life outside the gym. In this episode, we dive into the business model you need to have in mind as your stage in life changes. If you're interested in working with us, head to www.factoryforged.com/call
In this week's The Faces of Business episode, our guest speaker was Jennifer Herron. Jennifer is the Founder and CEO of Action Engineering LLC. She helps companies understand how adopting a model-based approach can help businesses run more effectively. You can find out more about us on our website. You can visit our blog page for this episode. Email us for more information: info@exityourway.us
Is it really your strokes that are holding you back from your best tennis? In this episode, Tennis Canada Level 4 coach Wayne Elderton joins the show and dives deep into ‘model' vs ‘game-based' coaching - and why a game-based or 'tactical' approach to coaching is more effective & efficient when it comes to learning. Wayne and I also discuss the shot cycle, the various tennis 'situations', scaling courts + equipment for young/beginner players and his take on practices that emphasize 0-4 shots.
In episode 13 of The Gradient Podcast, we interview Stanford Professor Chelsea Finn. Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Chelsea is an Assistant Professor at Stanford University. Her lab, IRIS, studies intelligence through robotic interaction at scale, and is affiliated with SAIL and the Statistical ML Group. She also spends time at Google as part of the Google Brain team. Her research deals with the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction.
Links:
- Learning to Learn with Gradients
- Visual Model-Based Reinforcement Learning as a Path towards Generalist Robots
- RoboNet: A Dataset for Large-Scale Multi-Robot Learning
- Greedy Hierarchical Variational Autoencoders for Large-Scale Video
- Example-Driven Model-Based Reinforcement Learning for Solving Long-Horizon Visuomotor Tasks
Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music". Get full access to The Gradient at thegradientpub.substack.com/subscribe
In Part 2 of this podcast series, IpX Director of Model Based Enterprise, Max Gravel, and Vertex Product Marketing Manager, John Heller, continue tearing down misconceptions and myths about starting a digital twin journey with further exploration into:
- How open access to 3D data encourages collaboration, and rewarding that collaboration to improve company culture
- Creating a customer-centric organization with direct product feedback and improvements to engineering
- A focus on downstream users and the ability for faster decision making without internal conflict
- How and why to jumpstart model based definition initiatives to move to a model based enterprise
- How to jumpstart model based definition inside an organization by leveraging data across the enterprise and looking for solutions that free that data in a secure way
- Finding solutions that allow you to free 3D data and augment innovative business applications that accelerate digital transformation
Read the referenced blog, “Connect Your Enterprise: How To Overcome Challenges in Change Management & 3D Visualization,” coauthored by Max and John from November 2020.
In this 2-part podcast series, IpX Director of Model Based Enterprise, Max Gravel, and Vertex Product Marketing Manager, John Heller, dive into the model-based world to tear down misconceptions and myths about starting a digital twin journey. They illustrate how data visibility throughout an enterprise accelerates business goals and unifies departments. Throughout the series, hear John and Max discuss:
- How digital twins go beyond engineering and support the overall user experience when representing 3D data with full traceability
- How 3D data acts as a company's universal language spanning departments, company culture, and locations
- The immediate value from breaking down silos in change and increasing cross-functional collaboration
- Ways for your organization to begin breaking down silos to share 3D data and adopt a digital mindset
- How and why to jumpstart model based definition initiatives to move to a model based enterprise
Read the referenced blog, “Connect Your Enterprise: How To Overcome Challenges in Change Management & 3D Visualization,” coauthored by Max and John from November 2020.
Teaching Science in Diverse Classrooms: Real Science for Real students
Don't throw away those owl pellets just yet.
Todd Campbell shares with us successful strategies and resources to use when implementing model based inquiry in your NGSS classroom. He describes ways to guide students through the sense making process. Check out the show notes for links to unit templates with this framework as well as other useful resources mentioned in the show: www.ngsnavigators.com/blog/031
5-29-18 Tonight we're getting with Billy Beltz, award-winning mead maker. Billy Beltz is the Co-Founder of Lost Cause Meadery, located in San Diego, CA. Billy and his wife Suzanna opened the doors to Lost Cause in November of 2017. After only six months of being open, they have already amassed several national awards for their meads, including medals at the Mazer Cup International, the San Diego International Beer Competition, and the CA State Fair Wine Competition. Prior to opening the meadery, Billy was an award-winning home mead maker with over 34 medals for his mead, including four Mazer Cup awards. He also had his research on ale yeast strains for mead making published in American Mead Maker and Zymurgy, and is a BJCP Certified Mead Judge. Lost Cause takes pride in crafting delicious, complex, and slightly carbonated meads that showcase unique honey varietals and a passion for experimentation. The meadery is located in a shared space with a cidery (Serpentine Cider) and a scratch kitchen (The Good Seed Food Co.). If you want us to tackle your mead making questions, you can send us a question and we'll tackle it online! Join us on live chat during the show. Bring your questions and your mead, and let's talk mead! You can call us at 803-443-MEAD (6323), or Skype us at meadwench (please friend me first and say you're a listener, I get tons of Skype spam), or tweet to @gotmeadnow. Click here to see a playable list of all our episodes! Show links and notes: Billy had a couple questions he asked in the AMMA group before the show. Here are the questions and what people had to say: What, if anything, is or should be sacred in mead making?
(think use of certain ingredients or processes to make mead, is there anything you feel you'll never do, or does it just matter what the final product tastes like) Meads made for mass appeal (sales) vs competitions (win awards) vs hype/ratings (like Untappd). Are these often the same or different meads? Why or why not? Carvin Wilson: 1 – Besides using a certain percentage of honey, I feel nothing should be sacred. You should not limit your palate, recipe design, or exploration based upon what others say. Exploration is one of the main components of innovation, so break rules and never stop asking what if or why. Item 2 - It's nice when a mead can cover all three categories, but from talking with a lot of professional mead makers, it seems what sells is not always what wins awards or carries a lot of hype. As a business owner, it's important to keep your target audience in mind, and that's not always judges or hype buyers. One of the reasons I think a mead that sells well does not do well in competitions is that it does not fit nicely into style guidelines. Another reason is judging will always be subjective; you must have a good mead on the right day in front of the right judges. With a hype mead, not only do you need a solid mead, but you really need to have your social media and fan base game going strong to pull something of that nature off. There are a lot of meaderies making solid mead, but they are not paying attention to the social media aspect, thinking that their mead is good enough on its own to get hype. Alex Gonzalez: #2 - I think there is a lot of gray area there, and most of the time at least 2 of those will overlap. I personally pull from all 3 categories, though "mass appeal" & "Untappd rating" are hard to separate for reasons that Sean mentioned. I pull inspiration locally from our surroundings, which includes produce/honey, cultural (both the communities and my own), as well as the local brewing community.
What works and sells locally may very well be a mead in the low to mid-80s at Mazer, but your community is already acclimated to your style and flavor profiles, leading to an average of 4+ on Untappd. It's definitely a fun conversation to have. I think it really displays market variation well, and how there is no one path that a mead maker needs to go down to ensure they are successful. Andrew Geffken: 1) Nothing. There are lots of meads that I personally don't like taste-wise, but I respect the producer for just going for it and continuing to experiment. Being open-minded is the best way to learn. 2) I think you can have the same for any combination except for mass appeal and hyped/rated. Look at the ratings for not just meads, but the Treehouse/Hill Farmstead compared to the mass appeal beers. To make the really hyped ones usually requires expensive ingredient costs or aging times that preclude them from being available broadly. Sean Grant: 1) I will never boil the water that has honey in it. 2) I know a few people who put a lot of preference on Untappd for deciding what to buy when they haven't had something before... especially when it comes to mead, due to the perception that people have of mead (i.e., it is strong, super sweet, and 'that is the stuff at the RenFair'). Aaron Schavey: 2.) imo just comes down to branding. A lot of the hard to get / hype stuff you don't see being entered in competitions. Not certain of the reasons behind not entering, but I know some brands choose not to enter competitions for their own reasons. I've had a lot of really fantastic mead from the hype, harder to get category as well as the competition side. Some of the very best mead I've had has been made by some of the Mazer Cup winners in the home brew sector. Amy Drew Hasle: 1. I feel extremely strongly that the primary ingredient be honey, and that no candy or sucrose make a major contribution.
I absolutely think that there should be a labeling restriction that only fermented beverages with 51% or more honey be labeled as Mead. 2. I chuckle at that question. HoneyRun stopped entering mead competitions because our style highlights the fruit, you can't leave honey off the entry description, and there isn't a "correct" BJCP balance of ingredients in our melomels that have fed the fam and kept gas in the car for 20 years. Peter James Schultz - Item 1 - as soon as there is something you will never do, there is a portion of the consumer base you will never capture. Be closed-minded at your own peril. Item 2 - different meads IMO. To echo what Patty said above: The best thing a Meadery can do is have a diverse portfolio of flavors. Offering your consumers meads varying in sweetness, abv, acidity, etc. will allow you to accommodate more people than if you only sell one type of mead. You can have your session mead for quick inventory turnover, your single origin honey with high quality fruit to win awards, and a barrel aged mead with whichever fruit is trendy at the time for your hype mead. Keith Weidemann: 1) To me it's all about the end product. It doesn't matter what you use as long as you got what you were aiming for, and if you didn't, turn it into a special one-off. Sometimes those are the best ones lol. I've had the debate about mead being at least 51% honey with myself a lot, because if other ingredients are dominant, then to me it's whatever with honey added. That being said, if you want to call it mead, who am I to say it's not? You made it, and I'll drink it. 2) I think if you're doing things like Billy, then they are the same, and that should be what you're striving for. But a question I have for this is: if you do enter a competition, are you entering a large batch pull-off, or do you make a small batch just for the entry so you can control everything more precisely and lower the risk of bad marks?
Adam Thompson - #1) I hold nothing sacred when making mead, and I don't think anybody else should either. #2) They can be the same, but they don't have to be. I think of it like commercial beer, where most beers on shelves would score low to mid 30's if blindly entered into a homebrew competition, but they still sell well for a variety of reasons. Jeff Katra - 1. I think the only thing sacred in mead making is not adding any refined sugars. I can understand secondary types of sugar like maple/etc. when making a certain style where it's called for, but back-sweetening with granulated sugar would be a sin. 2. The dynamic between mass appeal/hype versus actual style and quality is very subjective. Personally, I think that every product should at its core be of the highest quality. While not every batch is perfect, it should have been the result of meticulous process and passion for making it. With the competition in the market, excellent branding and positioning in the marketplace is almost necessary. What makes me sad or angry is when I see meads that are incredible fall by the wayside due to poor marketing. And even worse, mediocre meads that were over-hyped.
- Scott Lab Fermentation Handbook
- Biomass Content Governs Fermentation Rate in Nitrogen-Deficient Wine Must
- Sequential Use of Nitrogen Compounds by Saccharomyces cerevisiae during Wine Fermentation: a Model Based on Kinetic and Regulation Characteristics of Nitrogen Permeases
- Management of Multiple Nitrogen Sources during Wine Fermentation by Saccharomyces cerevisiae
- Wine secondary aroma: understanding yeast production of higher alcohols
- Metabolic and transcriptomic response of the wine yeast Saccharomyces cerevisiae strain EC1118 after an oxygen impulse under carbon-sufficient, nitrogen-limited fermentative conditions
- Effect of low temperature fermentation and nitrogen content on wine yeast metabolism
- Why, When and How to Measure YAN
- Chemistry in Winemaking
- Fermentation Management Practices Altered Fermentation Performances, Growth,
In this episode, we'll be Test Talking with Michael “Fritz” Fritzius, the founder of Arch DevOps, about model-based testing. Discover what model-based testing is, how it works, when to use it and who's a good candidate for it.