POPULARITY
Peter is a seasoned People/HR Leader with a track record of helping startups achieve sustainable, rapid growth. He emphasizes the importance of establishing a strong foundation, likening it to constructing a building. Having contributed to the expansion of companies such as Catawiki, Delivery Hero, Glovo, REMY Robotics, and Docplanner, Peter excels at fostering empowering experiences and solid frameworks. His proficiencies span diverse areas like People analytics, Business unit management, Talent development, HR Operations, Digital Transformation, Performance management, Compensation & Benefits, and Leadership growth.

Shownotes
00:00 - The Challenge of Disconnected HR Systems
03:11 - Navigating HR Tools for Startups
05:58 - Data Flow and Workforce Planning
09:08 - Streamlining Data Collection for New Hires
12:02 - Integrating Systems for Efficient HR Operations
15:00 - Leveraging Data for People Analytics
18:02 - The Importance of Macro and Micro Data
20:59 - AI in Recruitment and Performance Management
24:00 - Building a Comprehensive Hiring Framework
27:00 - Connecting Startups with HR Solutions

Links
Peter LinkedIn: https://www.linkedin.com/in/petervankersen/
Thomas LinkedIn: https://www.linkedin.com/in/thomas-kohler-pplwise/
Thomas e-mail: thomas@pplwise.com
pplwise: https://pplwise.com/
Highlights from this week's conversation include:
The Impact of AI (1:25)
Historical Context of Technology (2:31)
Pre-existing Infrastructure for Change (4:42)
AI as a Personal Assistant (7:10)
Future of Company Roles (9:13)
Managing Teams in a Dystopian AI Future (12:31)
Business Architecture Choices (15:52)
Integration Tool Usage (18:07)
AI's Impact on Data Roles (21:53)
AI as an Interface (24:04)
Trust in AI vs. SQL (27:12)
Snowflake's Acquisition of Dataflow (29:54)
Regression to the Mean Concept (33:49)
AI's Role in Data Platforms (37:04)
User Experience in Data Tools (44:41)
Future of Data Tools (46:57)
Environment Variable Setup (51:10)
Future of Software Implementation and Parting Thoughts (52:10)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack, visit rudderstack.com.
Join Dan Vega and DaShaun Carter for the latest updates from the Spring ecosystem. In this episode, Dan and DaShaun are joined by Spring Cloud engineer and Spring Cloud Data Flow expert Corneil du Plessis. Spring Cloud Data Flow is a powerful platform for data workflows. You can participate in our live stream to ask questions or catch the replay on your preferred podcast platform.

Show Notes
Spring Cloud Data Flow
Corneil du Plessis
Hi, Spring fans! In this installment, I talk to Spring Cloud Dataflow, Spring Cloud Task, and Spring Batch legend Glenn Renfro.
In our conversation with Andra Bria, Marketing Manager at Medicai, we explore the cutting-edge world of medical imaging data. Imagine a future where accessing, retrieving, and sharing medical images becomes seamless—regardless of geographical location or existing systems. Medicai is at the forefront of this transformation, creating secure pathways for physicians and patients alike. From overcoming CD-related delays to enabling care coordination, Medicai ensures that critical data flows effortlessly. Tune in to discover how data-driven healthcare is shaping the future, one image at a time.
Artur talks with Steffen about developing complex data structures, the benefits of using Power BI in real time, advanced data preparation, Fabric, and Steffen's experiences with Direct Lake. As Head of Analytics in eCommerce, Steffen is data driven in the truest sense of the word and actively pushes new topics forward. At AXRO, the need for real-time data is genuine, and the topics and the team around BI have grown considerably over time. Self-service is actively practiced, BI adoption has reached 50% of all employees, and new techniques are introduced promptly. Even in a leadership position, Steffen maintains a strong connection to hands-on practice, which is immediately apparent when he talks with Artur about the architecture of dataflows and a data warehouse and clearly highlights the added value of Fabric's new capabilities. Steffen studied business administration and economics in Kiel, where he developed a passion for data analysis. Since starting with Powerpivot in 2010, he has been continuously active in the Microsoft data space and has built a competent team of data analysts and data engineers who realize exciting things like Data Vaults, star schemas, measures, and more every day. That he lives directly across from a café called Fabric is, he believes, no coincidence.
Florian Christ, a serial entrepreneur and tech investor, shares his journey and insights on success and building successful companies. He emphasises the importance of teamwork, creating value, and having a positive impact. Florian started his entrepreneurial journey at age 18, building IT companies and later transitioning to software companies. He believes in trusting his team and giving them the freedom to create their own stories within the startups. Florian's companies focus on making data usable and creating products that simplify processes and solve pain points for customers. He has expanded his businesses globally, working with partners in different countries. Florian believes in sharing his ideas with others and gauging their interest and feedback before pursuing them. He values the feedback and energy of people to determine if an idea is worth pursuing, and he talks to anyone and everyone about his ideas, regardless of their background or expertise. Florian emphasises the importance of learning and continuously expanding one's knowledge. He has pursued various educational programs, including Harvard and Tony Robbins, to gain a broader understanding of international business and human behaviour. Florian's vision is to have a larger impact by helping more people become entrepreneurs and create products they love. He aims to connect companies and automate data flow between them to improve business processes. Florian values time, both his own and that of others, and believes in giving freedom and space to his team to grow and make decisions.

Resources we discuss:
The Almanack of Naval Ravikant: https://amzn.to/3ypkk7M

Jam is doing the CEO sleepout to raise money for Vinnies. To support him, donate here or share to your network!

Our Library - What we are currently reading:
Dylan - Stronger by Dinesh Palipana: https://amzn.to/4bUOnmr

Follow us on all your favourite platforms:
https://linktr.ee/thequestforsuccess

Every 5-star review helps the show more than you can imagine : )
SIEM has been a cybersecurity standard as far back as 2005. Its job is to collect, correlate, and analyze data from disparate sources for use in risk detection and forensic investigation. Over the last two decades, SIEM has evolved to leverage advances like big data analytics and AI to process more data faster.

But today's guest says SIEM has a fatal flaw. Chris Jordan, CEO of Fluency Corp., says SIEM's database-centric design causes critical delays in early detection and rapid response.

What's the solution? Stay tuned as Jordan shares breakthroughs in streaming analytics and the future of SIEM.

Learn how Overwatch Managed XDR is truly the next generation of SIEM in cybersecurity: https://www.highwirenetworks.com/services/open-xdr/

To get more cybersecurity news from High Wire Networks, visit: https://www.highwirenetworks.com/news-events/

To learn more about the Cybersecurity Simplified Podcast and to browse previous episodes, visit: https://www.highwirenetworks.com/cybersecurity-podcasts/

Have an inquiry or topic request? Reach out to: podcast@highwirenetworks.com
This week on the Expert Voices podcast, Randy Wootton, CEO of Maxio, speaks with Anthony Nitsos, Founder and CEO of SaaS Gurus. With a background that intriguingly intertwines medical school education and extensive expertise in finance, Anthony has over 25 years of experience building SaaS financial models. Randy and Anthony discuss principles for building and scaling backend financial systems that can transition from startup to scaleup without creating a "tech pile." By drawing parallels between human anatomy and business functionalities, Anthony emphasizes the significance of a single source of truth in accounting systems and the critical nature of precise and actionable data for organizational leadership.

Quotes
"If your spine and your core muscles are not strong, the rest of your body hanging off of that isn't going to be. The same thing works in the back office of finance systems. You need to have a rock-solid accounting system that ties to a rock-solid forecasting system that both feed into a KPI system. That is our three-part, financial core." -Anthony Nitsos [17:00]
"One big challenge is creating that one holistic view of the customer and product data that then helps inform not just the statements, but also the operating reports that you're using to run the business. Understanding what's going on with churn, understanding what's going on with retention. There are some exceptions, but you have to go and figure it out, you have to recode it. I think early-stage companies are just scrambling to get product out." -Randy Wootton [14:39]

Expert Takeaways
Interconnected Thinking: Addressing root causes and systemic connections in finance and operations in SaaS companies.
The Financial Core: A strong, streamlined accounting system tied to a precise KPI system is crucial for providing actionable intelligence to executive teams.
Tech Stack vs. Tech Pile: Creating a proper technology stack rather than a disjointed tech pile for seamless data flow and informed decision-making.
Cash is King: Accurate cash flow forecasting is paramount and enables CEOs to plan effectively for growth or investment.
Building Valuation: The ultimate objective of a CFO in the venture-backed SaaS environment is to assist companies in growing their valuation and preparing for successful exits or investor rounds.

Timestamps
(01:20) Anthony Nitsos: Applying medical training to problem-solving
(08:57) Preventing, detecting, and correcting in-process optimization
(13:30) Need for a unified customer list for integrated analysis
(19:05) Providing actionable intelligence as a strategic enabler
(26:13) The significance of accurate revenue recognition for SaaS companies
(34:30) The CFO: "Cash Flow Oracle"
(41:06) Projecting cash and analyzing it
(47:05) What is the mission of the CFO?

Links
MAXIO
Upcoming Events
Maxio Institute Report
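Anthony's "Cash is King" takeaway can be made concrete with a toy runway calculation. This is purely illustrative and not a model from Maxio or SaaS Gurus; the function and its compounding-burn assumption are invented for the example:

```python
# Toy cash-runway projection: why even small errors in burn assumptions
# change how long a CEO can plan for. Illustrative only; real forecasts
# net out collections timing, payroll cycles, and deferred revenue.

def months_of_runway(cash: float, monthly_burn: float,
                     monthly_growth: float = 0.0) -> int:
    """Months until cash runs out, with burn compounding at `monthly_growth`."""
    months = 0
    while cash > 0:
        cash -= monthly_burn
        monthly_burn *= 1 + monthly_growth
        months += 1
        if months > 600:  # guard against effectively infinite runway
            break
    return months

print(months_of_runway(1_200_000, 100_000))        # flat burn: 12 months
print(months_of_runway(1_200_000, 100_000, 0.05))  # 5% compounding burn: only 10
```

Note how a modest 5% monthly growth in burn shaves two months off a one-year runway, which is the kind of gap an imprecise accounting system hides.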
In this presentation, I provide a thorough exploration of how dataflow analysis serves as a formidable method for discovering and addressing cybersecurity threats across a wide spectrum of vulnerability types. For instance, I'll illustrate how we can employ dynamic information flow tracking to automatically detect "blind spots"—sections of a program's input that can be changed without influencing its output. These blind spots are almost always indicative of an underlying bug. Furthermore, I will demonstrate how the use of hybrid control- and dataflow information in differential analysis can aid in uncovering variability bugs, commonly known as "heisenbugs." By delving into these practical applications of dataflow analysis and introducing open-source tools designed to implement these strategies, the goal is to present practical steps for pinpointing, debugging, and managing a diverse array of software bugs.

About the speaker:
Dr. Evan Sultanik is a principal computer security researcher at Trail of Bits. His recent research covers language-theoretic security, program analysis, detecting variability bugs via taint analysis, dependency analysis via program instrumentation, and consensus protocols for distributed ledgers. He is an editor of and frequent contributor to the offensive computer security journal "Proof of Concept or GTFO." Prior to joining Trail of Bits, Dr. Sultanik was the Chief Scientist at Digital Operatives and, prior to that, a Senior Research Scientist at The Johns Hopkins Applied Physics Laboratory. His dissertation was on the discovery of a family of combinatorial optimization problems whose solutions can be approximated to within a constant factor of optimal in polylogarithmic time on a parallel computer or distributed system. This was a surprising result, since many of the problems in the family are NP-Hard. In a life prior to academia, Evan was a professional software engineer.
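As a rough illustration of the blind-spot idea (not the tooling from the talk, which uses dynamic information flow tracking rather than mutation), here is a minimal byte-flipping sketch: input bytes whose mutation never changes the program's output get flagged as blind spots. The `parse_header` program and its message layout are invented for the example; its bug is that it reads a checksum byte but never validates it.

```python
# Crude blind-spot detector: mutate each input byte and see whether the
# program's output changes. Bytes that never influence the output are
# blind spots, which (per the talk) almost always indicate a bug.

def parse_header(data: bytes) -> int:
    """Toy parser for [length][payload...][checksum]. Bug: checksum unused."""
    length = data[0]
    payload = data[1:1 + length]
    _checksum = data[-1]  # read but never validated -- a blind spot
    return sum(payload)

def find_blind_spots(program, data: bytes) -> list[int]:
    """Return input byte offsets whose value never changes the output."""
    baseline = program(data)
    blind = []
    for i in range(len(data)):
        influenced = False
        for flip in (0x01, 0xFF):  # a couple of mutations per byte
            mutated = bytearray(data)
            mutated[i] ^= flip
            try:
                if program(bytes(mutated)) != baseline:
                    influenced = True
                    break
            except Exception:
                influenced = True  # a crash counts as influence
                break
        if not influenced:
            blind.append(i)
    return blind

data = bytes([3, 10, 20, 30, 0x42])  # length=3, three payload bytes, checksum
print(find_blind_spots(parse_header, data))  # → [4]: the ignored checksum
```

Real taint-tracking tools track influence precisely instead of sampling mutations, but the output is the same kind of report: which input regions the program silently ignores.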
Join us on an insightful data-driven journey through the travel ecosystem! Our guest, Natalie Seatter, Chief Data and Product Officer at OAG, uncovers how data impacts different entities in the travel space, including airports, hedge funds, finance companies, governments and more.
Sam Selikoff is the founder of Build UI, Inc. He joins the panel to unpack a myriad of discussions surrounding JavaScript and its applications. They delve into topics such as the RPC resurgence, React Server Components, and the challenges and solutions around integrating design and components. A variety of technical concepts, tools, and frameworks, including Tailwind, Redux, and Remix, are also explored. Additionally, the episode touches upon important mental health conversations, personal experiences, and the pitfalls of fragmented media subscriptions.

Sponsors
Chuck's Resume Template
Developer Book Club
Become a Top 1% Dev with a Top End Devs Membership

Socials
Twitter: @samselikoff

Picks
AJ - No Backend
AJ - Home Assistant
AJ - CloudFree
AJ - AmeriDroid
AJ - Chaos Walking
Dan - Blue Eye Samurai
Dan - Samurai Jack
Sam - Lessons in Chemistry

Support this podcast at — https://redcircle.com/javascript-jabber/donations
Privacy & Opt-Out: https://redcircle.com/privacy
In Honor of Data Privacy Week 2024, we're publishing a special episode. Instead of interviewing a guest, Debra shares her 'Top 20 Privacy Engineering Resources' and why. Check out her favorite free privacy engineering courses, books, podcasts, creative learning platforms, privacy threat modeling frameworks, conferences, government resources, and more.

DEBRA'S TOP 20 PRIVACY ENGINEERING RESOURCES (in no particular order)
Privado's Free Course: 'Technical Privacy Masterclass'
OpenMined's Free Course: 'Our Privacy Opportunity'
Data Protocol's Privacy Engineering Certification Program
The Privacy Quest Platform & Games; Bonus: The Hitchhiker's Guide to Privacy Engineering
'Data Privacy: A Runbook for Engineers' by Nishant Bhajaria
'Privacy Engineering, a Data Flow and Ontological Approach' by Ian Oliver
'Practical Data Privacy: enhancing privacy and security in data' by Katharine Jarmul
'Strategic Privacy by Design, 2nd Edition' by R. Jason Cronk
'The Privacy Engineer's Manifesto: getting from policy to code to QA to value' by Michelle Finneran-Dennedy, Jonathan Fox and Thomas R. Dennedy
USENIX Conference on Privacy Engineering Practice and Respect (PEPR)
IEEE's The International Workshop on Privacy Engineering (IWPE)
Institute of Operational Privacy Design (IOPD)
'The Shifting Privacy Left Podcast,' produced and hosted by Debra J Farber and sponsored by Privado
Monitaur's 'The AI Fundamentalists Podcast' hosted by Andrew Clark & Sid Mangalik
Skyflow's 'Partially Redacted Podcast' with Sean Falconer
The LINDDUN Privacy Threat Model Framework & LINDDUN GO Card Game
The Privacy Library Of Threats 4 Artificial Intelligence (PLOT4ai) Framework & PLOT4ai Card Game
The IAPP Privacy Engineering Section
The NIST Privacy Engineering Program Collaboration Space
The EDPS Internet Privacy Engineering Network (IPEN)

Read "Top 20 Privacy Engineering Resources" on Privado's Blog.

Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.

Shifting Privacy Left Media
Where privacy engineers gather, share, & learn

TRU Staffing Partners
Top privacy talent - when you need it, where you need it.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Copyright © 2022 - 2024 Principled LLC. All rights reserved.
Oracle's next-gen cloud platform, Oracle Cloud Infrastructure, has been helping thousands of companies and millions of users run their entire application portfolio in the cloud. Today, the demand for OCI expertise is growing rapidly. Join Lois Houston and Nikita Abraham, along with Rohit Rahi, as they peel back the layers of OCI to discover why it is one of the world's fastest-growing cloud platforms.

Oracle MyLearn: https://mylearn.oracle.com/
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
X (formerly Twitter): https://twitter.com/Oracle_Edu

Special thanks to Arijit Ghosh, Kiran BR, Rashmi Panda, David Wright, the OU Podcast Team, and the OU Studio Team for helping us create this episode.

------------------------------------------------------

Episode Transcript:

00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started.

00:26 Lois: Welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Principal Technical Editor.

Nikita: Hi there! You're listening to our Best of 2023 series, where over the next few weeks, we'll be revisiting six of our most popular episodes of the year.

00:47 Lois: Today is episode 2 of 6, and we're throwing it back to our very first episode of the Oracle University Podcast. It was a conversation that Niki and I had with Rohit Rahi, Vice President, CSS OU Cloud Delivery. During this episode, we discussed Oracle Cloud Infrastructure's core coverage on different tiers.

Nikita: But we began by asking Rohit to explain what OCI is and tell us about its key components. So, let's jump right in.
01:14 Rohit: Some of the world's largest enterprises are running their mission-critical workloads on Oracle's next generation cloud platform called Oracle Cloud Infrastructure. To keep things simple, let us break them down into seven major categories: Core Infrastructure, Database Services, Data and AI, Analytics, Governance and Administration, Developer Services, and Application Services. But first, the foundation of any cloud platform is the global footprint of regions. We have many generally available regions in the world, along with multi-cloud support with Microsoft Azure and a differentiated hybrid offering called Dedicated Region Cloud@Customer.

01:57 Rohit: We have building blocks on top of this global footprint, the seven categories we just mentioned. At the very bottom, we have the core primitives: compute, storage, and networking. Compute services cover virtual machines, bare metal servers, containers, a managed Kubernetes service, and a managed VMWare service. These services are primarily for performing calculations, executing logic, and running applications. Cloud storage includes disks attached to virtual machines, file storage, object storage, archive storage, and data migration services.

02:35 Lois: That's quite a wide range of storage services. So Rohit, we all know that networking plays an important role in connecting different services. These days, data is growing in size and complexity, and there is a huge demand for a scalable and secure approach to store data. In this context, can you tell us more about the services available in OCI that are related to networking, database, governance, and administration?

03:01 Rohit: Networking features let you set up software defined private networks in Oracle Cloud. OCI provides the broadest and deepest set of networking services with the highest reliability, most security features, and highest performance.
Then we have database services. We have multiple flavors of database services, both Oracle and open source. We are the only cloud that runs Autonomous Databases and multiple flavors of it, including OLTP, OLAP, and JSON. And then you can run databases in virtual machines, bare metal servers, or even Exadata in the cloud. You can also run open source databases, such as MySQL and NoSQL, in Oracle Cloud Infrastructure.

03:45 Rohit: Data and AI Services: we have a managed Apache Spark service called Dataflow, a managed service for tracking data artifacts across OCI called Data Catalog, and a managed service for data ingestion and ETL called Data Integration. We also have a managed data science platform for machine learning models and training, as well as a managed Apache Kafka service for event streaming use cases. Then we have Governance and Administration services. These services include security, identity, and observability and management. We have unique features like compartments that make it operationally easier to manage large and complex environments. Security is integrated into every aspect of OCI, whether it's automatic detection and remediation, what we typically refer to as Cloud Security Posture Management, robust network protection, or encryption by default. We have an integrated observability and management platform with features like logging, logging analytics, and Application Performance Management, and much more.

04:55 Nikita: That's so fascinating, Rohit. And is there a service that OCI provides to ease the software development process?

Rohit: We have a managed low code service called APEX, several other developer services, and a managed Terraform service called Resource Manager. For analytics, we have a managed analytics service called Oracle Analytics Cloud that integrates with various third-party solutions.
Under Application Services, we have a managed serverless offering called Functions, an API Gateway, and an Events Service to help you create microservices and event-driven architectures.

05:35 Rohit: We have a comprehensive connected SaaS suite across your entire business, finance, human resources, supply chain, manufacturing, advertising, sales, customer service, and marketing, all running on OCI. That's a long list. And these seven categories and the services mentioned represent just a small fraction of more than 80 services currently available in OCI. Fortunately, it is quick and easy to try out a new service using our industry-leading Free Tier account. We are the first cloud to offer a server for just a penny per core hour. Whether you're starting with Oracle Cloud Infrastructure or migrating your entire data set into it, we can support you in your journey to the cloud.

06:28 Have an idea and want a platform to share your technical expertise? Head over to the new Oracle University Learning Community. Drive intellectual, free-flowing conversations with your peers. Listen to experts and learn new skills. If you are already an Oracle MyLearn user, go to MyLearn to join the Community. You will need to log in first. If you have not yet accessed Oracle MyLearn, visit mylearn.oracle.com and create an account to get started. Join the conversation today!

07:04 Nikita: Welcome back! Now let's listen to Rohit explain the core constructs of OCI's physical architecture, starting with regions.

Rohit: A region is a localized geographic area comprising one or more availability domains. Availability domains are one or more fault tolerant data centers located within a region, but connected to each other by a low latency, high bandwidth network. A fault domain is a grouping of hardware and infrastructure within an availability domain to provide anti-affinity. So think about these as logical data centers.
Today OCI has a massive geographic footprint, with multiple regions around the world. We also have a multi-cloud partnership with Microsoft Azure and a differentiated hybrid cloud offering called Dedicated Region Cloud@Customer.

08:02 Lois: But before we dive into the physical architecture, can you tell us…how does one actually choose a region?

Rohit: When choosing a region, you choose the region closest to your users for lowest latency and highest performance. So that's a key criterion. The second key criterion is data residency and compliance requirements. Many countries have strict data residency requirements, and you have to comply with them. And so you choose a region based on these compliance requirements.

08:31 Rohit: The third key criterion is service availability. New cloud services are made available based on regional demand at times, regulatory compliance reasons, resource availability, and several other factors. Keep these three criteria in mind when choosing a region. So let's look at each of these in a little bit more detail. Availability domains: availability domains are isolated from each other, fault tolerant, and very unlikely to fail simultaneously. Because availability domains do not share physical infrastructure, such as power or cooling or the internal network, a failure that impacts one availability domain is unlikely to impact the availability of others. Say a particular region has three availability domains. If one availability domain has some kind of outage and is not available, the other two availability domains are still up and running.

09:26 Rohit: We talked about fault domains a little bit earlier. What are fault domains? Think of each availability domain as having three fault domains. So think about fault domains as logical data centers within an availability domain. We have three availability domains, and each of them has three fault domains.
So the idea is you put the resources in different fault domains, and they don't share a single point of hardware failure, like physical servers, a physical rack, top of rack switches, or a power distribution unit. You can get high availability by leveraging fault domains. We also leverage fault domains for our own services. So in any region, resources in at most one fault domain are being actively changed at any point in time. This means that availability problems caused by change procedures are isolated at the fault domain level. Moreover, you can control the placement of your compute or database instances to a fault domain at instance launch time. So you can specify which fault domain you want to use.

10:29 Nikita: So then, what's the general guidance for OCI users?

Rohit: The general guidance is we have these constructs, like fault domains and availability domains, to help you avoid single points of failure. We do that on our own. So we make sure that the servers, the top of rack switch, all are redundant. So you don't have hardware failures, or we try to minimize those hardware failures as much as possible. You need to do the same when you are designing your own architecture. So let's look at an example. You have a region. You have an availability domain. And as we said, one AD has three fault domains, so you see those fault domains here.

11:08 Rohit: So the first thing you do when you create an application is create a software-defined virtual network. And then let's say it's a very simple application. You have an application tier. You have a database tier. The first thing you could do is run multiple copies of your application. So you have an application tier which is replicated across fault domains, and then you have a database, which is also replicated across fault domains.

11:34 Lois: What's the benefit of this replication, Rohit?

Rohit: Well, it gives you that extra layer of redundancy.
So if something happens to a fault domain, your application is still up and running. Now, to take it to the next step, you could replicate the same design in another availability domain. So you could have two copies of your application running, and you can have two copies of your database running.

11:57 Now, one thing which will come up is how do you make sure your data is synchronized between these copies? You could use various technologies like Oracle Data Guard to make sure that your primary and standby data is kept in sync. And so you can design architectures like these to avoid single points of failure. Even for regions where we have a single availability domain, you could still leverage the fault domain construct to achieve high availability and avoid single points of failure.

12:31 Nikita: Thank you, Rohit, for taking us through OCI at a high level.

Lois: For a more detailed explanation of OCI, please visit mylearn.oracle.com, create a profile if you don't already have one, and get started on our free training on OCI Foundations.

Nikita: We hope you enjoyed that conversation. Join us next week for another throwback episode. Until then, this is Nikita Abraham...

Lois: And Lois Houston, signing off!

12:57 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
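The anti-affinity idea Rohit walks through can be sketched in a few lines. This is an illustrative stand-in, not Oracle's placement logic or the OCI SDK; the fault-domain names and both functions are invented for the example:

```python
# Minimal sketch of fault-domain anti-affinity: spread replicas of each
# tier round-robin across fault domains so no single hardware failure
# takes out every copy. Names are illustrative only.
from itertools import cycle

FAULT_DOMAINS = ["FAULT-DOMAIN-1", "FAULT-DOMAIN-2", "FAULT-DOMAIN-3"]

def place_replicas(tier: str, count: int) -> dict[str, str]:
    """Assign each replica of a tier to the next fault domain in turn."""
    fds = cycle(FAULT_DOMAINS)
    return {f"{tier}-{i}": next(fds) for i in range(count)}

app = place_replicas("app", 3)  # app tier replicated across all three FDs
db = place_replicas("db", 2)    # db tier across two FDs

# Losing any one fault domain still leaves replicas of both tiers running:
for failed in FAULT_DOMAINS:
    assert any(fd != failed for fd in app.values())
    assert any(fd != failed for fd in db.values())
print(app)
```

The same sketch extends to the two-availability-domain design from the episode: run one such placement per AD and keep the database copies in sync (with something like Data Guard) across them.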
Welcome to episode 227 of the Cloud Pod podcast - where the forecast is always cloudy! This week your hosts are Justin, Jonathan, Matthew and Ryan - and they're REALLY excited to tell you all about the 161 things announced at Google Next. Literally, all the things. We're also saying farewell to EC2 Classic, Amazon SES, and Azure's Explicit Proxy - which probably isn't what you think it is. Titles we almost went with this week:
Jessie Healy Spent $20 Million on Digital Ads, Here's What's Working In 2023

Sit down with Jessie Healy, a digital advertising veteran with a focus on e-commerce. Jessie, whose agency oversees $20 million in digital advertising, discusses the subtle art of crafting an ad that resonates. She also offers a nuanced perspective on the role of algorithms, the pitfalls of over-optimization, and why a one-size-fits-all approach to advertising doesn't work. But she also brings something more: an understanding that behind every click, every metric, there's a human making a choice. It's a conversation that goes beyond tactics to explore what really makes consumers click, literally and metaphorically.

Show Links
Webtopia
Jessie on LinkedIn
Jessie on Twitter

Sponsors
Free 30-day trial of Zipify OCU - To get an unadvertised gift, email help@zipify.com and ask for the "Tech Nasty Bonus".
Elevar - Never miss another conversion with Elevar's server-side tracking. Enhance Klaviyo flow performance 2-3x. Plans from $0 a month + 15-day free trials on all plans.
Theme Updater Plus: Vault - Effortlessly back up and quickly restore your Shopify Store
Loop Returns: Ecommerce Returns Management for Shopify - Brands using Loop retain over 50% of their revenue by promoting exchanges over refunds.
Not on Shopify? Start your free trial

Never miss an episode
Subscribe wherever you get your podcasts
Join Kurt's newsletter

Help the show
Ask a question in The Unofficial Shopify Podcast Facebook Group
Leave a review
Subscribe wherever you get your podcasts

What's Kurt up to?
See our recent work at Ethercycle
Subscribe to our YouTube Channel
Apply to work with Kurt to grow your store.
Mukai read about the distributed execution infrastructure behind the TPU ML accelerator. Please send comments and feedback via Reddit or the listener mailbox. Reviews and stars on iTunes are appreciated too.
In this episode of the Digital Supply Chain podcast, I had a fascinating chat with Alex Yaseen, founder and CEO of Parabola. Alex and his team are doing incredible work, empowering non-technical folks to automate complex, manual data processes within the supply chain sector.

We delved into how Parabola allows operators to essentially 'drag and drop' tasks, making it super accessible and user-friendly, and even tackle tricky stuff like parsing handwriting from PDFs using AI - mind-blowing!

Alex shared some compelling success stories too. From major freight forwarders like Flexport to 100-year-old family businesses like NFI, Parabola is enabling these organizations to dramatically increase efficiency, minimize errors, and promote creative innovation.

Importantly, we touched on the fear many operators have about automation taking their jobs. But as Alex rightly pointed out, using Parabola isn't about job loss; it's about evolving roles, improving job quality, and opening up possibilities for advancement.

Parabola is indeed transforming the world of data in supply chains, and it's been an absolute pleasure having Alex on the show. Tune in for some great insights and remember: the future of supply chains is digital, and it's happening now!

Don't forget to check out the video version of this episode at https://youtu.be/HOiMZfMaWp0

Enjoy the episode!

Support the show

Podcast supporters
I'd like to sincerely thank this podcast's generous supporters: Lorcan Sheehan, Krishna Kumar, Christophe Kottelat, Olivier Brusle, Robert Conway, Alexandra Katopodis, Alicia Farag, and Joël VANDI. And remember, you too can support the podcast - it is really easy and hugely important, as it will enable me to continue to create more excellent Digital Supply Chain episodes like this one.

CSCMP European Conference Registration page

Podcast Sponsorship Opportunities: If you/your organisation is interested in sponsoring this podcast - I have several options available.
Let's talk!

Finally
If you have any comments, suggestions or questions for the podcast, feel free to send me a direct message on Twitter/LinkedIn. If you liked this show, please don't forget to rate and/or review it. It makes a big difference in helping new people discover it. Thanks for listening.
We break down and merge some film review analysis with the Master Data Flow King of the league himself (and make a poor attempt to play his dumb trivia games) - but ultimately reach key conclusions about this year's rookie prospects, especially the pass catchers. A must-listen before your rookie draft!
Bruno Aziza – Head of Data and Analytics at Google Cloud – has a unique expert perspective on the data industry through his work with data executives and visionaries while managing a broad portfolio of products at Google including BigQuery, Dataflow, Dataproc, Looker, Data Studio, PubSub, Composer, Data Fusion, Catalog, Dataform, Dataprep and Dataplex. In this episode, we dive with Bruno into topics such as the development (and disappointment) of Data Mesh, how it relates to Data Products, and how data leaders can transform their organizations by first looking inward and transforming themselves. We also speak to why Chief Data Officers are navigating from 'reactive analytics' to real-time data products. Follow Bruno Aziza on LinkedIn, Twitter, and Medium. What's New In Data is a data thought leadership series hosted by John Kutay, who leads data and products at Striim. What's New In Data hosts industry practitioners to discuss the latest trends, common patterns for real-world data, and analytics success stories.
This week, we welcome Suchakra Sharma, Chief Scientist at Privado.ai, where he builds code analysis tools for data privacy & security. Previously, he earned his PhD in Computer Engineering from Polytechnique Montreal, where he worked on eBPF technology and hardware-assisted tracing techniques for OS analysis. In this conversation, we delve into Suchakra's background in shifting left for security and how he applies traditional, tested static analysis techniques — such as 'taint tracking' and 'data flow analysis' — to large code bases at scale to help fix privacy leaks right at the source.

---------
Thank you to our sponsor, Privado, the developer-friendly privacy platform.
---------

Suchakra aligns himself with the philosophical aspects of privacy and wishes to work on anything that helps limit the erosion of privacy in modern society, since privacy is fundamental to all of us. These kinds of needs have always been here, and as societies have advanced, this is a time when we require more guarantees of privacy. After all, it is humans that are behind systems, and it is humans that are going to be affected by the machines that we build.
Check out this fascinating discussion on how to shift privacy left in your organization.

Topics Covered:
Why Suchakra was interested in privacy after focusing on static code analysis for security
What 'shift left' means, and lessons learned from the 'shift security left' movement that can be applied to 'shift privacy left' efforts
Sociological perspectives on how humans developed a need for keeping things 'private' from others
How to provide engineering-focused guarantees around privacy today, and what the role of engineers should be within this 'shift privacy left' paradigm
Suchakra's USENIX Enigma talk and discussion of 'taint tracking' and 'data flow analysis' techniques
Which companies should build in-house tooling for static analysis, and which should outsource to experienced vendors like Privado
How to address 'privacy bugs' in code; why it's important to have an 'auditor's mindset'; and why we'll see 'Privacy Bug Bounty Programs' soon
Suchakra's advice to engineering managers on moving the needle on privacy in their orgs

Resources Mentioned:
Join Privado's Slack Community
Review Privado's Open Source Code Scanning Tools

Guest Info:
Connect with Suchakra on LinkedIn

Privado.ai - Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.
Shifting Privacy Left Media - Where privacy engineers gather, share, & learn
Buzzsprout - Launch your podcast

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Copyright © 2022 - 2024 Principled LLC. All rights reserved.
Jon and Ben discuss the highlights of the 1.65, 1.66, and 1.67 releases of Rust. Contributing to Rustacean Station Rustacean Station is a community project; get in touch with us if you'd like to suggest an idea for an episode or offer your services as a host or audio editor! Twitter: @rustaceanfm Discord: Rustacean Station Github: @rustacean-station Email: hello@rustacean-station.org Timestamps & referenced resources [@01:11] - Rust 1.65 [@01:28] - Generic Associated Types More detailed blog post [@06:48] - let-else statements if_chain crate [@16:56] - break from labeled blocks [@19:21] - Splitting Linux debuginfo [@20:44] - Stabilized APIs std::backtrace::Backtrace [@22:41] - RLS deprecation [@23:19] - Changelog deep-dive [@23:30] - Cargo queue reordering Benchmarking results [@24:54] - Niches in data-filled enums [@27:23] - poll_fn and Unpin [@28:05] - Too many personalities [@29:20] - uninit integers are UB Working Group discussion [@33:23] - Uplift let_underscore lint [@35:13] - #[non_exhaustive] on enum variants [@36:27] - Rust 1.66.0 [@36:40] - Explicit discriminants on enums with fields Dark and forbidden secrets RFC [@40:05] - core::hint::black_box Tracking issue discussion [@46:34] - cargo remove [@46:52] - Stabilized APIs Mixed integer operations BTreeMap/Set first/last operations std::os::fd [@50:51] - Changelog deep-dive [@51:10] - Cargo publish changes [@53:33] - Don't link to libresolv or libiconv on Darwin [@54:41] - sym in asm [@55:18] - Soundness fix for impl Trait [@57:27] - Allow transmutes across lifetimes [@57:45] - Unicode 15 [@58:24] - for loops over Option and Result [@1:00:38] - Rust 1.66.1 Security advisory. Affects primarily users with insteadOf in their git config. Prefer pushInsteadOf instead. 
You may also be interested in: Rustup 1.25.2 [@1:02:41] - Rust 1.67 [@1:02:45] - #[must_use] on async fn [@1:04:07] - sync::mpsc updated Long-standing mpsc panic The PR crossbeam crate CachePadded AtomicCell [@1:07:52] - Stabilized APIs NonZero*::BITS [@1:08:38] - Changelog deep-dive [@1:08:45] - Ratio-aware decompression limit Original CVE Original fix [@1:10:40] - Ordering of array fields [@1:13:08] - Compilation targets Sony PlayStation 1 target Remove linuxkernel targets Target configuration x86_64-unknown-none [@1:14:45] - Dataflow-based MIR constant propagation [@1:15:37] - The drop order twist The effect on let-chains let-chains tracking issue [@1:20:48] - Inconsistent rounding of 0.5 [@1:23:24] - Android NDK update in 1.68 [@1:23:54] - Help test cargo's HTTP protocol
In this week's episode of the Business of Data podcast, host Catherine King talks with Gurpreet Muctor, Chief Architect & Data Officer, Smart Cities for Westminster City Council. Together they discuss the benefits of creating and maintaining a flowing data ecosystem. In the discussion this week:
Strategic roadmaps to achieve goals
Data capabilities surrounding internal requirements
Public sector challenges of attracting talent
Complex data sources
Exploring opportunities for data - they won't all be winners
Highlights from this week's conversation include:
Defining data flow (2:31)
Are there limitations in timely data flow operation and/or building operators? (8:20)
Areas of incremental computation that are having an impact today (17:10)
Building a library vs building a product (24:06)
Combining delight and empathy into a focus (27:52)
Final thoughts and takeaways (32:42)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data. RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
Today I'm chatting with Bruno Aziza, Head of Data & Analytics at Google Cloud. Bruno leads a team of outbound product managers in charge of BigQuery, Dataproc, Dataflow and Looker and we dive deep on what Bruno looks for in terms of skills for these leaders. Bruno describes the three patterns of operational alignment he's observed in data product management, as well as why he feels ownership and customer obsession are two of the most important qualities a good product manager can have. Bruno and I also dive into how to effectively abstract the core problem you're solving, as well as how to determine whether a problem might be solved in a better way. Highlights / Skip to: Bruno introduces himself and explains how he created his “CarCast” podcast (00:45) Bruno describes his role at Google, the product managers he leads, and the specific Google Cloud products in his portfolio (02:36) What Bruno feels are the most important attributes to look for in a good data product manager (03:59) Bruno details how a good product manager focuses on not only the core problem, but how the problem is currently solved and whether or not that's acceptable (07:20) What effective abstracting the problem looks like in Bruno's view and why he positions product management as a way to help users move forward in their career (12:38) Why Bruno sees extracting value from data as the number one pain point for data teams and their respective companies (17:55) Bruno gives his definition of a data product (21:42) The three patterns Bruno has observed of operational alignment when it comes to data product management (27:57) Bruno explains the best practices he's seen for cross-team goal setting and problem-framing (35:30) Quotes from Today's Episode “What's happening in the industry is really interesting. 
For people that are running data teams today and listening to us, the makeup of their teams is starting to look more like what we do [in] product management.” — Bruno Aziza (04:29) “The problem is the problem, so focus on the problem, decompose the problem, look at the frictions that are acceptable, look at the frictions that are not acceptable, and look at how by assembling a solution, you can make it most seamless for the individual to go out and get the job done.” – Bruno Aziza (11:28) “As a product manager, yes, we're in the business of software, but in fact, I think you're in the career management business. Your job is to make sure that whatever your customer's job is that you're making it so much easier that they, in fact, get so much more done, and by doing so they will get promoted, get the next job.” – Bruno Aziza (15:41) “I think that is the task of any technology company, of any product manager that's helping these technology companies: don't be building a product that's looking for a problem. Just start with the problem back and solution from that. Just make sure you understand the problem very well.” (19:52) “If you're a data product manager today, you look at your data estate and you ask yourself, ‘What am I building to save money? When am I building to make money?' If you can do both, that's absolutely awesome. And so, the data product is an asset that has been built repeatedly by a team and generates value out of data.” – Bruno Aziza (23:12) “[Machine learning is] hard because multiple teams have to work together, right? You got your business analyst over here, you've got your data scientists over there, they're not even the same team. And so, sometimes you're struggling with just the human aspect of it.” (30:30) “As a data leader, an IT leader, you got to think about those soft ways to accomplish the stuff that's binary, that's the hard [stuff], right? 
I always joke, the hard stuff is the soft stuff for people like us because we think about data, we think about logic, we think, ‘Okay if it makes sense, it will be implemented.' For most of us, getting stuff done is through people. And people are emotional, how can you express the feeling of achieving that goal in emotional value?” – Bruno Aziza (37:36) Links As referenced by Bruno, “Good Product Manager/Bad Product Manager”: https://a16z.com/2012/06/15/good-product-managerbad-product-manager/ LinkedIn: https://www.linkedin.com/in/brunoaziza/ Bruno's Medium Article on Competing Against Luck by Clayton M. Christensen: https://brunoaziza.medium.com/competing-against-luck-3daeee1c45d4 The Data CarCast on YouTube: https://www.youtube.com/playlist?list=PLRXGFo1urN648lrm8NOKXfrCHzvIHeYyw
Highlights from this week's conversation include:
What is Materialize? (2:43)
Frank and Arjun's journey in data and what led them to the idea of Materialize (6:22)
The good and the bad of research in academia vs starting a company (25:20)
The MVP for databases (33:49)
Materialize's end-to-end benefit for the user experience (43:03)
Interchanging Materialize in warehouse and cloud data usage (48:25)
The trade-offs within Materialize (1:00:02)
Final takeaways and previewing part two of the conversation (1:09:25)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data. RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
'Visual Programming' refers to a style of programming that allows the user to specify a program in a two- (or more-) dimensional fashion. Visual programming environments represent the data, control flow, or program state in a graphical way, allowing them to be directly manipulated. It has been a hot area of research from the very beginning of personal computing to today. This week we will cover a few major visual programming environments, why visual programming has remained compelling over the decades, and whether there is untapped potential for VP today.

Chapters:
[00:00:00] Intros
[00:03:50] What is Visual Programming?
[00:05:42] Origins
[00:14:34] Block-based Visual Programming
[00:20:26] Wire and Dataflow-based Visual Programming
[00:31:51] An Umbrella Term
[00:36:31] Conceptual History
[00:48:23] The Duality of Direct Manipulation
[00:58:40] Direct Manipulation of Running State
[01:11:25] Programming by Example
[01:21:17] Fill in the Details for Me
[01:28:49] Strengths of Visual Programming
[01:43:36] Leveraging the Visual Cortex
[01:50:58] Second Order Effects

Links/Resources:
SketchPad demo: https://www.youtube.com/watch?v=2Cq8S3jzJiQ
Pygmalion paper: http://worrydream.com/refs/Smith%20-%20Pygmalion.pdf
Pygmalion demo: https://youtu.be/xNW8wUpbqQM?t=319
Grail demo: https://www.youtube.com/watch?v=2Cq8S3jzJiQ
Hypercard demo: https://www.youtube.com/watch?v=2Cq8S3jzJiQ
Viewpoint: https://scottkim.com/2020/06/07/viewpoint/
Scratch: https://www.bryanbraun.com/2022/07/16/scratch-is-a-big-deal/
LabVIEW (imperative control flow): https://www.ni.com/en-us/shop/labview.html
Unreal Engine Blueprint (functional): https://docs.unrealengine.com/5.0/en-US/blueprints-visual-scripting-in-unreal-engine/
https://blueprintsfromhell.tumblr.com/
Max/MSP for musicians: https://cycling74.com/products/max
Others:
https://cables.gl/
https://nodes.io/

===== About “The Technium” =====
The Technium is a weekly podcast discussing the edge of technology and what we can build with it.
Each week, Sri and Wil introduce a big idea in the future of computing and extrapolate the effect it will have on the world.Follow us for new videos every week on web3, cryptocurrency, programming languages, machine learning, artificial intelligence, and more!===== Socials =====WEBSITE: https://technium.transistor.fm/ SPOTIFY: https://open.spotify.com/show/1ljTFMgTeRQJ69KRWAkBy7 APPLE PODCASTS: https://podcasts.apple.com/us/podcast/the-technium/id1608747545
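The wire-and-dataflow chapter describes environments where nodes compute values that flow along wires to downstream nodes, which fire once their inputs are ready. As a rough illustration of that shared execution model, here is a toy Python sketch (an assumption-laden illustration only, not the API of any tool mentioned above):

```python
# Toy dataflow graph: each node maps to (function, list of input nodes).
# A node "fires" once all of its inputs have produced values - the
# execution model that wire-based visual tools make visible on screen.

def evaluate(graph, node, cache=None):
    """Pull-based evaluation of one node in a dataflow graph."""
    if cache is None:
        cache = {}
    if node not in cache:
        fn, inputs = graph[node]
        # Recursively resolve inputs, then compute this node once.
        cache[node] = fn(*(evaluate(graph, i, cache) for i in inputs))
    return cache[node]

# Two source nodes feed a "sum" node, which feeds a "double" node.
graph = {
    "a": (lambda: 2, []),
    "b": (lambda: 3, []),
    "sum": (lambda x, y: x + y, ["a", "b"]),
    "double": (lambda x: x * 2, ["sum"]),
}

result = evaluate(graph, "double")  # 2 + 3 = 5, doubled to 10
```

Editors in the Max/MSP or Blueprint family essentially let users draw a structure like `graph` directly and re-evaluate affected nodes as wires change.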
Stephanie Wong talks with guests Shachar Guz, Inna Weiner, and Gabe Weiss about Google's Database Migration Service and how it helps companies move data to Google Cloud. DMS simplifies what is typically a complicated process, handling everything from planning to security to validating database migrations. DMS has undergone some changes since we last spoke with Shachar and Gabe. It's gone GA and helped thousands of customers benefit from the service. Migrations are possible from any PostgreSQL database source to AlloyDB for PostgreSQL, which is designed to support HTAP workloads (transactional and analytical). One of the most exciting updates is the introduction of the DMS modernization journey, which allows customers to change database type during migration (heterogeneous). In addition, migrations with DMS can be set up to continuously replicate data between the old and new databases. With this feature, developers can compare application performance on the old database vs. the new one. Inna talks about the benefits of keeping your data in the cloud, like secure, reliable, and scalable data storage. Google Cloud takes care of the maintenance work for you as well. DMS takes security seriously and supports multiple security methods to keep your data safe as it migrates. We talk about the different customers using DMS and how the process works for homogeneous and heterogeneous migrations. Before you even start, Gabe tells us, DMS helps you prepare for the migration. And tools like Dataflow can help when customers decide full migration would be too difficult. We talk about the difference between Datastream and DMS and use cases for each. We wrap up the show with a look at the future of DMS.

Shachar Guz
Shachar is a product manager at Google Cloud; he works on the Cloud Database Migration Service. Shachar has worked in various product and engineering roles and shares a true passion for data and helping customers get the most out of their data.
Shachar is passionate about building products that make cumbersome processes simple and straightforward and helping companies adopt Cloud technologies to accelerate their business.

Inna Weiner
Inna is a senior technical leader with 20+ years of global experience. She is a big data expert, specializing in deriving insights from data, product and user analytics. Currently, she leads engineering for Cloud DMS. Inna enjoys building diverse engineering organizations with a common vision, growth strategy and inclusive culture.

Gabe Weiss
Gabe leads the database advocacy team for the Google Cloud Platform team, ensuring that developers can make awesome things, both inside and outside of Google. That could mean speaking at conferences, writing example code, running bootcamps, writing technical blogs or just doing some hand holding. Prior to Google he's worked in virtual reality production and distribution, source control, the games industry and professional acting.

Cool things of the week
Flexible committed use discounts — a simple new way to discount Compute Engine instances blog
Understanding transactional locking in Cloud Spanner blog
Interactive In-console Tutorial site

Interview
Database Migration Service site
GCP Podcast Episode 262: Database Migration Service with Shachar Guz and Gabe Weiss podcast
AlloyDB for PostgreSQL site
PostgreSQL site
Datastream site
Dataflow site
CloudSQL site
Spanner site

What's something cool you're working on?
Gabe has been tinkering with new Google Cloud databases and managing a new team.

Hosts
Stephanie Wong
In this episode of Scaling Postgres, we discuss using rust for Postgres extensions, performance comparisons of TimescaleDB vs. Postgres, uninterrupted writes when sharding in Citus and the Postgres data flow. Subscribe at https://www.scalingpostgres.com to get notified of new episodes. Links for this episode: https://postgresml.org/blog/postgresml-is-moving-to-rust-for-our-2.0-release/ https://www.timescale.com/blog/postgresql-timescaledb-1000x-faster-queries-90-data-compression-and-much-more/ https://www.citusdata.com/blog/2022/09/19/citus-11-1-shards-postgres-tables-without-interruption/ https://www.crunchydata.com/blog/postgres-data-flow https://www.cybertec-postgresql.com/en/postgresql-sequences-vs-invoice-numbers/ https://postgrespro.com/blog/pgsql/5969741 https://www.crunchydata.com/blog/fun-with-postgres-functions https://www.depesz.com/2022/09/18/what-is-lateral-what-is-it-for-and-how-can-one-use-it/ https://www.postgresql.fastware.com/blog/column-lists-in-logical-replication-publications https://blog.dalibo.com/2022/09/19/psycopg-pipeline-mode.html https://pganalyze.com/blog/5mins-postgres-python-psycopg-3-1-pipeline-mode-query-performance https://ideia.me/using-the-timescale-gem-with-ruby https://b-peng.blogspot.com/2022/09/configuring-vip-route-table.html https://postgres.fm/episodes/index-maintenance https://postgresql.life/post/peter_smith/ https://www.rubberduckdevshow.com/episodes/59-rails-postgres-scaling-with-andrew-atkinson/
Host Stephanie Wong chats with storage pros Sean Derrington and Nishant Kohli this week to learn more about cost optimization with storage projects and exciting new launches in the Google Cloud storage space! To start, we talk about the Storage Spotlight of years past and the cool Google Cloud products that Google is unveiling this year. Optimization is a huge theme this year, with a focus not only on cost optimization but also on performance and resource use. Enterprise readiness and storage everywhere, Sean tells us, are the most important pillars as Google continues to improve offerings. We learn about Hyperdisk and the three customizable attributes users can control, and the benefits of Filestore Enterprise for GKE for large client systems. Nishant talks about Cloud Storage and how clients are using it at scale for their huge data projects. Specifically, Google Storage has been working to help clients with large-scale data storage needs to optimize costs with Autoclass. Storage Insights is another new tool launching late this year or early next year that empowers better decision-making through increased knowledge and analytics of storage usage. GKE storage is getting a revamp as well with Backup for GKE to help clients recover applications and data easily. Google Cloud for Backup and DR helps keep projects secure as well. This managed service is easy to use and integrate into all cloud projects and can be used with on-prem projects and then backed up into the cloud. This is ideal for clients as they shift to cloud or hybrid systems. Companies like Redivis take advantage of some of these new data features, and Nishant talks more about how Autoclass and other tools have helped them save money and improve their business.

Sean Derrington
Sean is the Group Product Manager for the storage team. He is a long-time storage industry PM veteran; he's worked on Veritas, Symantec, and Exablox (a storage startup).
Nishant Kohli
Nishant has a decade-plus of Object Storage experience at Dell/EMC and Hitachi. He's currently Senior Product Manager on the storage team.

Cool things of the week
Cloud Next 2022 site
Integrating ML models into production pipelines with Dataflow blog
Four non-traditional paths to a cloud career (and how to navigate them) blog

Interview
What's New & Next: A Spotlight on Storage site
Google Cloud Online Storage Products site
GCP Podcast Episode 277: Storage Launches with Brian Schwarz and Sean Derrington podcast
GKE site
Filestore site
Filestore Enterprise site
Filestore Enterprise for fully managed, fault tolerant persistent storage on GKE blog
Cloud Storage site
Cloud Storage Autoclass docs
GCP Episode 307: FinOps with Joe Daly podcast
Storage Insights docs
GCP Podcast Episode 318: GKE Turns 7 with Tim Hockin podcast
Backup for GKE docs
Backup and DR Service site
Redivis site

What's something cool you're working on?
Stephanie is working on new video content and two Next sessions: one teaching how to simplify and secure your network for all workloads and one talking about how our infrastructure partner ecosystem helps customers.

Hosts
Stephanie Wong
On The Cloud Pod this week, the team gets skeptical on Prime Day numbers. Plus: AWS re:Inforce brings GuardDuty, Detective and Identity Center updates and announcements; Google Cloud says hola to Mexico with a new Latin American region; and Azure introduces its new cost API for EC and MCA customers. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights
In this episode with Brazil's two leading specialists on the subject, Thiago Santiago and Gustavo Gattass, we talk about Cloudera's new data platform, which as always brings innovation to the Big Data and Analytics market. Doug Cutting, creator of the famous Apache Hadoop system, made massive data processing possible back in 2006, and now Cloudera's new unified CDP platform brings the following major benefits to its customers:
Hybrid Cloud
Cloudera SDX as a unified deployment platform with Kubernetes
Data Engineering and Data Science as a unified delivery product
Data Warehouse and data visualization
Get a view of the future of Data Engineering and Data Science on a platform whose main goal is delivering a complete end-to-end solution - come aboard Cloudera CDP.

Thiago Santiago = https://www.linkedin.com/in/thiagosantiago/
Gustavo Gattas = https://www.linkedin.com/in/ggattass/

On YouTube we have a Data Engineering channel covering the most important topics in this field, with live streams every Wednesday.
https://www.youtube.com/channel/UCnErAicaumKqIo4sanLo7vQ

Want to stay on top of this field with weekly posts and updates? Follow on LinkedIn so you don't miss any news.
https://www.linkedin.com/in/luanmoreno/

Available on Spotify and Apple Podcasts
https://open.spotify.com/show/5n9mOmAcjra9KbhKYpOMY
https://podcasts.apple.com/br/podcast/engenharia-de-dados-cast/

Luan Moreno = https://www.linkedin.com/in/luanmoreno/
Eric Anderson (@ericmander) reunites with old colleagues Kenn Knowles (@KennKnowles) and Pablo Estrada (@polecitoem) for a conversation on Apache Beam, the open-source programming model for data processing. The trio once worked together at Google, and Beam was a turning point in the history of open-source there. Today, both Kenn and Pablo are members of the Beam PMC, and join the show with the inside scoop on Beam's past, present and future. In this episode we discuss: Transitioning Beam to the Apache Way How “inner source” works at Google Thoughts on the relationship between batch processing and streaming Some ways that community “power users” have contributed to Beam Information on Beam Summit 2022, the first onsite summit since COVID began The first few people to register can use code BEAM_POD_INV for a discount on tickets! Links: Apache Beam Apache Spark Apache Flink Apache Nemo Apache Samza Apache Crunch MapReduce paper MillWheel paper FlumeJava paper Dataflow paper Beam Summit 2022 Website Other episodes: TensorFlow with Rajat Monga
Who has access to data and what can be done with that data? Shannon Ellis, Ph.D., shares how she is training undergraduate students to be effective, inclusive and ethical data scientists. She discusses how data can be used, the limits of data science, and the barriers and biases that may shape data sets and potential conclusions. Series: "Data Science Channel" [Science] [Education] [Show ID: 37838]
Jeremiah Lowin is co-founder and CEO of Prefect, the company behind the popular open source data workflow orchestration system with the same name. We discussed the major design changes in Prefect 2.0, their move towards treating “code as workflows”, data engineering challenges facing data and ML teams today, and implications of looming trends in machine learning and AI.Download the FREE Report: State of Workflow Orchestration → https://www.prefect.io/lp/gradientflow?utm_source=gradientflow&utm_medium=newsletterSubscribe: Apple • Android • Spotify • Stitcher • Google • AntennaPod • RSS.Detailed show notes can be found on The Data Exchange web site.
In this episode, I talk about International Laws for Cyber Crime, Data Breaches, U.S. Data Privacy Laws, EU-GDPR, OECD Guidelines, Import & Export Controls, Transborder Data Flow and PCI-DSS. If you like this episode, do share it with your buddies, and feel free to reach out to me with your suggestions, comments and queries. https://linkedin.com/in/tanayshandilya --- Send in a voice message: https://anchor.fm/tanayshandilya/message Support this podcast: https://anchor.fm/tanayshandilya/support
Links and vulnerability summaries for this episode are available at: https://dayzerosec.com/podcast/nimbuspwn-a-clfs-vulnerability-and-dataflow.html A few vulnerabilities from a TOCTOU to an arbitrary free, and some research into using data-flow in your fuzzing. [00:00:18] Spot the Vuln - Where's it At? [00:03:44] Nimbuspwn - A Linux Elevation of Privilege [00:08:38] Windows Common Log File System (CLFS) Logical-Error Vulnerability [CVE-2022-24521] [00:15:32] Arbitrary Free in Accusoft ImageGear ioca_mys_rgb_allocate [00:25:31] Commit Level Vulnerability Dataset [00:28:44] DatAFLow - Towards a Data-Flow-Guided Fuzzer The DAY[0] Podcast episodes are streamed live on Twitch (@dayzerosec) twice a week: Mondays at 3:00pm Eastern (Boston) we focus on web and more bug bounty style vulnerabilities Tuesdays at 7:00pm Eastern (Boston) we focus on lower-level vulnerabilities and exploits. The Video archive can be found on our Youtube channel: https://www.youtube.com/c/dayzerosec You can also join our discord: https://discord.gg/daTxTK9 Or follow us on Twitter (@dayzerosec) to know when new releases are coming.
Hi, Spring fans! Welcome to another installment of A Bootiful Podcast! In this installment, Josh Long (@starbuxman) talks to Spring Cloud luminary and all-around lovable guy Glenn Renfro (@cppwfs) about batch processing, tasks, messaging, integration, data flow, and a million other things. Also: t-shirts!
On the podcast this week, your hosts Stephanie Wong and Mark Mirchandani talk about the data processing tool Apache Beam with guests Pablo Estrada and Kenneth Knowles. Kenn starts us off with an overview of how Apache Beam began and how Cloud Dataflow was involved. The unique batch and stream method and emphasis on correctness garnered support from developers early on and continues to attract users. Pablo helps us understand why Beam is a better option for certain projects looking to process large amounts of data. Our guests describe how Beam may be a better fit than microservices that could become obsolete as company needs change. Next, we step back and take a look at why batch and stream is the gold standard of data processing because of its balance between low latency and ease of “being done” with data collection. Beam's focus on the correctness of data and correctness in processing that data is a core component. With good data, processing becomes easier, more reliable, and cheaper. Kenn gives examples of how things can go wrong with bad data processing. Beam strives for the perfect combination of low latency, correct data, and affordability. Users can choose where to run Beam pipelines, from other Apache software offerings to Dataflow, which means excellent flexibility. Our guests talk about the pros and cons of some of these options and we hear examples of how companies are using Beam along with supporting software to solve data processing challenges. To get started with Beam, check out Beam College or attend Beam Summit 2022. Kenneth Knowles Kenn Knowles is chair of the Apache Beam Project Management Committee. Kenn has been working on Google Cloud Dataflow—Google's Beam backend—since 2014. Kenn holds a PhD in programming languages from the University of California, Santa Cruz. Pablo Estrada Pablo is a Software Engineer at Google, and a management committee member for Apache Beam. 
Pablo is big into open source and has worked all across the Apache Beam stack. Cool things of the week Under the sea: Building the world's fiber optic internet video Discovering Data Centers videos Google Data Cloud Summit site It's official—Google Distributed Cloud Edge is generally available blog GCP Podcast Episode 228: Fastly with Tyler McMullen podcast Save big by temporarily suspending unneeded Compute Engine VMs—now GA blog Interview Apache Beam site Apache Beam Documentation site Dataflow site Apache Flink site Apache Spark site Apache Samza site Apache Nemo site Spanner site BigQuery site Beam College site Beam College on Github site Beam Developer Mailing List email Beam User Mailing List email Beam Summit site What's something cool you're working on? Mark is working on a new Apache Beam video series, Getting Started With Apache Beam. Hosts Stephanie Wong and Mark Mirchandani
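As a rough intuition for the batch-and-stream model the guests describe: Beam assigns each timestamped element to one or more windows and aggregates per window, which works the same whether elements arrive in order (batch) or out of order (stream). The toy function below is plain Python, not the Beam API, and the `(timestamp, value)` event shape is an assumption for illustration:

```python
from collections import defaultdict

def fixed_windows(events, window_size):
    """Sum values per fixed time window of length `window_size`.

    Toy model of windowed aggregation: because each event carries its own
    timestamp, arrival order does not matter, which is what lets a single
    model cover both batch and streaming input.
    """
    windows = defaultdict(int)
    for ts, value in events:
        window_start = (ts // window_size) * window_size  # window assignment
        windows[window_start] += value
    # emit (window_start, window_end, sum) in time order
    return [(start, start + window_size, total)
            for start, total in sorted(windows.items())]

# events arrive out of order, as they would on a stream
events = [(1, 10), (4, 5), (7, 2), (3, 1)]
print(fixed_windows(events, 5))  # [(0, 5, 16), (5, 10, 2)]
```

Real Beam adds the hard parts this sketch omits, notably watermarks and triggers for deciding when a window is "done" on an unbounded stream, which is exactly the low-latency vs. completeness trade-off discussed in the episode.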
Our resident accountant and financial connoisseur of numbers breaks down how the data flows from field to scouting report, and which key indicators you should look at when evaluating a prospect.
We are all used to analyzers of various kinds, but, like any "magic", their implementation hides plenty of secrets. We talk about this and more with Sergey Vasiliev, Head of DevRel at PVS-Studio. We also have a special promo code for PVS-Studio: dotnet_podcast We experiment a lot, and your opinion matters to us; share it in our survey: https://forms.gle/rScV3Wy6EmUHmhAAA Thanks to everyone who listens. Don't hesitate to leave feedback and suggest your own topics. Shownotes: 0:08:45 The difference between "syntactic", "static", and "statistical" 0:13:00 About Roslyn 0:21:45 AST for beginners 0:33:10 Analyzers for everyone 0:37:40 Debugging and pain 0:48:40 Roslyn and performance 0:55:30 Data-flow analysis 1:02:00 Annotating methods 1:16:15 Taint analysis 1:40:00 War stories 2:19:00 Security (SAST) 2:37:00 What to do with 100500 warnings? 2:47:00 How to convince management to buy PVS-Studio Links: - https://pvs-studio.com/dotnet_pvs : PVS-Studio - https://bit.ly/3Ba1tLt : PVS-Studio YouTube - https://devblogs.microsoft.com/dotnet/how-to-write-a-roslyn-analyzer/ : How to write a Roslyn Analyzer - https://www.jetbrains.com/help/resharper/Code_Inspection__Creating_Custom_Inspections_and_QuickFixes.html : R# Create custom code inspections and quick-fixes - https://pvs-studio.com/ru/blog/posts/csharp/0399/ : Introduction to Roslyn and its use for building static analysis tools - https://pvs-studio.com/ru/blog/posts/csharp/0867/ : Building a static analyzer for C# on top of the Roslyn API - https://pvs-studio.com/ru/blog/posts/0908/ : PVS-Studio's static code analysis technologies - https://pvs-studio.com/ru/blog/posts/csharp/0831/ : On taint analysis - https://pvs-studio.com/ru/blog/posts/csharp/0918/ : On XXE - https://pvs-studio.com/ru/blog/posts/csharp/0876/ : On SCA - https://owasp.org/www-project-top-ten/ : OWASP Top 10 2021 - https://cwe.mitre.org/top25/archive/2021/2021_cwe_top25.html : 2021 CWE Top 25 Most Dangerous Software Weaknesses - https://pvs-studio.com/ru/blog/posts/0606/ : How to convince management to buy PVS-Studio Video: https://www.youtube.com/watch?v=mYrLCCgoc-E Listen to all episodes: https://anchor.fm/dotnetmore YouTube: https://www.youtube.com/playlist?list=PLbxr_aGL4q3R6kfpa7Q8biS11T56cNMf5 Discuss: - VK: https://vk.com/dotnetmore - Telegram: https://t.me/dotnetmore_chat Follow the news: – Twitter: https://twitter.com/dotnetmore – Telegram channel: https://t.me/dotnetmore Copyright: https://creativecommons.org/licenses/by-sa/4.0/
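As a taste of the data-flow and taint analysis topics in the shownotes: a taint analyzer marks data coming from untrusted sources and reports when it reaches a dangerous sink without sanitization. The sketch below is a deliberately tiny, hypothetical version over Python's `ast` module (the episode itself concerns C# and Roslyn); the source name `input` and sink name `run_query` are arbitrary assumptions, and it only tracks direct assignments:

```python
import ast

SOURCE = "input"      # assumed taint source (user-controlled data)
SINK = "run_query"    # assumed dangerous sink (e.g. raw SQL execution)

def find_tainted_sinks(code):
    """Report (line, variable) for each SINK call fed by a variable
    assigned directly from SOURCE. Direct assignments only; a real
    analyzer propagates taint through expressions, calls, and branches."""
    tree = ast.parse(code)
    tainted = set()
    findings = []
    for node in ast.walk(tree):
        # step 1: record names bound by `x = input(...)`
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.Call):
            fn = node.value.func
            if isinstance(fn, ast.Name) and fn.id == SOURCE:
                for target in node.targets:
                    if isinstance(target, ast.Name):
                        tainted.add(target.id)
        # step 2: flag `run_query(x)` where x is tainted
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name) \
                and node.func.id == SINK:
            for arg in node.args:
                if isinstance(arg, ast.Name) and arg.id in tainted:
                    findings.append((node.lineno, arg.id))
    return findings

snippet = "x = input()\nrun_query(x)\ny = 'constant'\nrun_query(y)"
print(find_tainted_sinks(snippet))  # [(2, 'x')]
```

Tools like PVS-Studio do this over a full interprocedural data-flow model rather than a single syntax-tree walk, which is where the annotation and performance topics from the episode come in.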
Jeremiah Lowin, CEO & Founder, Prefect Jeremiah is the founder of Prefect, the dataflow company built on top of the open-source workflow project Prefect Core as well as the recently released orchestration-engine project Prefect Orion. Unlike many open-source companies, Prefect didn't start out as open source. Two years into the company-building journey, Prefect Core launched, and the traction since then has been very strong. Today, Prefect has a Slack community of over 10K members and has raised from VCs including Patrick O'Shaughnessy, Tiger Global, and Positive Sum. In this episode, we discuss the early Prefect story, open source as a social network, different ways to approach fundraising as an open-source company, product positioning, and more!
In this episode, our managing editor Jenn Webb and I speak with Chris White, CTO of Prefect, a startup building tools to help companies build, monitor, and manage dataflows. Prefect originated from lessons Chris and his co-founder learned while they were at Capital One, where they were early users and contributors to related projects like Apache Airflow.Subscribe: Apple • Android • Spotify • Stitcher • Google • RSS.Detailed show notes can be found on The Data Exchange web site.Subscribe to The Gradient Flow Newsletter.
Wee Hyong Tok joins Scott Hanselman to provide an Azure Data Factory update with several members of the Azure Data Factory engineering team. [0:00:22] – Intro [0:02:48] – Using Azure Data Factory and Azure Purview for data governance and data integration (Linda Wang) [0:16:37] – Fast startup time to data flow compute clusters (Mark Kromer) [0:31:46] – CI/CD improvements with Azure Data Factory (Abhishek Narain) [0:47:08] – Wrap-up. Links: Azure Data Factory; Push Data Factory lineage data to Azure Purview (Preview); How to startup your data flows execution in less than 5 seconds! (Public Preview); Automated publishing for continuous integration and delivery; Data integration at scale with Azure Data Factory or Azure Synapse Pipeline; Create a free account (Azure)
Featured interview: Prospects for free data flow from the EU to South Korea following the EU's conclusion that Seoul's privacy protections comply with its General Data Protection Regulation (GDPR) - the expected effects once Korea passes the EU's GDPR adequacy review. Guest: Professor Park Nohyoung, Korea University Law School
Episode 16 of the podcast covers actionable advice for companies on risk management, especially third-party due diligence, privacy, network security, and vulnerabilities. (00:08) Intro (00:55) Question 1: What advice would you give to companies on how to think about cyber risk - whether through data management, compliance, or the processes themselves? Can and should this risk even be quantified? (05:15) Question 2: M&A issues around data mapping: How should companies think about integration issues when acquiring the legacy data of a target company? For example, should companies ask for reps and warranties around that data? And can companies be advised on how to properly determine indemnification amounts? (12:02) Question 3: How should organizations build teams to work through these issues - whether internal or external? What are the characteristics of teams and vendors that effectively address cyber risk on behalf of organizations? What do you look for in a vendor for clients? (19:14) Question 4: We think about regulations including CCPA, GDPR, and requirements pushed down through various industries - how should companies be thinking about compliance? How do third parties play into this? (22:40) Question 5: How is the work-from-home movement affecting cyber risk, and how should companies evaluate their security stances and policies in light of the remote-workforce reality we find ourselves in?