Join Lois Houston and Nikita Abraham, along with Senior Principal Database & Security Instructor Ron Soltani, as they discuss how the new Automatic SQL Plan Management feature in Oracle Database 23ai improves performance consistency and simplifies management. Then, Senior Principal Database & MySQL Instructor Bill Millar shares insights into two new features: one that enhances SecureFiles LOB Write Performance, improving read and write speeds, and another that increases the column limit in a table to 4,096, making it easier to handle complex data.

Oracle MyLearn: https://mylearn.oracle.com/ou/course/oracle-database-23ai-new-features-for-administrators/137192/207062
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
X: https://twitter.com/Oracle_Edu

Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode.

--------------------------------------------------------

Episode Transcript:

00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!

00:26 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Principal Technical Editor with Oracle University, and joining me is Lois Houston, Director of Innovation Programs.

Lois: Hi there! Last week, we looked at the Oracle Database 23ai enhancements that have been made to Hybrid Columnar Compression and Fast Ingest. In today's episode, we'll talk about the 23ai new feature for Automatic SQL Plan Management with Ron Soltani, a Senior Principal Database & Security Instructor with Oracle University.

01:01 Nikita: And later on, we'll be joined by Bill Millar, another Senior Principal Database & MySQL Instructor, who will tell us about the 23ai automatic feature that enhances SecureFiles LOB Write Performance. We'll also get him to talk about the Wide Columns update. So, let's get started. Hi Ron! What have been the common challenges with SQL plans and database performance?

Ron: One of the problems we have always had, if you remember, was that when data changed, or database configuration settings and parameters changed, SQL statements that had been running very well could start behaving badly with the SQL plans associated with them. And remember, Oracle generally likes to continuously reuse the same SQL plan. So in the past, SQL plans were put into baselines: a SQL plan baseline is a set of approved plans for a SQL statement, built from the SQL history stored in AWR, that the optimizer can choose from. However, deciding which plan to choose, and which one would be the best to use, has always been the problem in managing SQL plan baselines, and much of that work had to be done manually.

02:22 Lois: And what have we done to overcome this?

Ron: This new system now performs all of those operations automatically for us. It can search the Automatic Workload Repository, find SQL plans for a particular SQL statement, and then look for any alternative plans that may be available in other sources, like SQL tuning sets. It then validates those plans to see whether they are good enough to be used, as the SQL plan baseline, by the optimizer when executing the SQL statement.
03:00 Nikita: So we now have the Automatic SQL Plan Management Evolve Advisor to help manage operations automatically, right? Can you tell us a little more about it? How does it ensure optimal performance?

Ron: This is an automatic advisor that can go look for different plans and validate them by examining each one, making sure it does not cause any regression compared to the previous operation, and then evolve that plan into a good baseline. This simplifies management of the baseline repository for a SQL statement. So as data changes and parameters change, the optimizer can come up with different types of plans within this baseline, each validated as a good baseline for its particular situation. This way you reduce a lot of hard parsing operations.

04:00 Lois: And how does the SQL Evolve Advisor work, Ron?

Ron: First, it checks the AWR to find the top SQL statements. Then it looks to see whether these top SQLs, which did not perform well with the plan they have (that's why they're top SQL), have alternative plans stored in the SQL plan history, in AWR, or available from any other sources. If it finds any additional plans, it goes ahead and adds all of them to the plan history. So in the plan history, you now have an accumulation of all the plans available in AWR plus anything brought in from other sources. Then it tests every one of those plans and validates that, by using the plan, the SQL statement will not degrade and get slower: the performance must be either similar or actually better. Normally, there is a percentage by which the SQL should improve. We then validate these baselines. And finally, once those plans have been validated, they are accepted and added as SQL plan baselines. They remain in the statement history, in the AWR, and are available to the optimizer for future use.

05:28 Nikita: What are the benefits of this?

Ron: Number one is Autonomous Database. As you know, the goal there is to automate all management, including management of SQL execution as changes happen to the application, the data, or the database and its environment. It totally eliminates any manual intervention in managing the statement, and it can transparently repair any statement that has been affected by a major change.

06:00 Lois: What sort of problems does this feature solve for us?

Ron: First, of course, is performance consistency. We want to make sure that every statement performs at its best, that any specific changes that may impact those SQL statements are taken into account, and that a better plan, if available, is then used. It also improves application performance, and therefore the database service level improves as well. And the SQL execution plans are managed automatically behind the scenes: the baselines are expanded and their history maintained by this automatic SQL plan management environment.

06:50 Nikita: And when do we use this?

Ron: When there is a change in the database environment, like adding SGA, a change to the shared pool, a change in the size of the buffer cache, or any kind of storage effect. All of those can affect SQL execution. Any of those changes, including data changes, can cause a SQL plan to stop behaving as well as it did before.
Therefore, if particular plans do not perform as well as they did before, that affects the performance of the application, and it also affects the performance of the database and the instance.

07:35 Lois: So, how do we use this environment?

Ron: Well, the best news I have for you is that nothing needs to be done manually. All we need to do is, number one, make sure we enable automatic SQL plan management, which is done through the DBMS_SPM package for SQL plan management. You use the package's configure option to enable the auto SPM evolve task and set it to auto. Once this is done, automatic SQL plan management and the evolve advisor are enabled; they will monitor your statements, review the top SQLs as they are found by the ADDM operation, and then do their work of looking for better plans and maintaining the SQL plan baselines we talked about. Now, for you to view and monitor how these operations are going, you can take a look at the DBA_SQL_PLAN_BASELINES view. There are many, many columns in that view, including newly added columns that tell you where a plan was generated from and whether a plan is approved; any other user interaction with the plan or its settings can also be verified using that view. [SQL sketches of these steps appear after this transcript.]

09:13 Are you looking for practical use cases to help you plan and apply configurations that solve real-world challenges? With the new Applied Learning courses for Cloud Applications, you'll be able to practically apply the concepts learned in our implementation courses and work through case studies featuring key decisions and configurations encountered during a typical Oracle Cloud Applications implementation. Applied learning scenarios are currently available for General Ledger, Payables, Receivables, Accounting Hub, Global Human Resources, Talent Management, Inventory, and Procurement, with many more to come! Visit mylearn.oracle.com to get started.

09:54 Nikita: Welcome back! Let's bring Bill into the conversation. Hi Bill! Can you tell us about the 23ai automatic feature that enhances SecureFiles LOB Write Performance?

Bill: The key here is that it is automatic and transparent. There are no parameters to set, nothing to configure on the table, no hints: nothing you have to do to get these improvements. It is tightly integrated with the SecureFiles LOB infrastructure. So now, multiple LOBs can be handled in a single transaction and can be buffered simultaneously. This will help with mixed workloads that switch between the LOBs being written in a single transaction. The PGA will adaptively resize based on the size of these large LOB writes if you're using the NOCACHE option. Remember, NOCACHE bypasses the buffer cache and does direct reads and writes from the PGA. The JSON type will be transformed into the OSON Oracle data type, which is an optimized native binary storage format for JSON data.

11:15 Lois: Ok. So, going forward, there will be better read and write performance for LOBs.

Bill: Multiple LOBs in a single transaction can be buffered simultaneously, improving mixed workloads. We just talked about the PGA: the buffer is resized automatically. And there is the improved JSON support, because the database will recognize, hey, this is a JSON data type. But traditionally, JSON data types were small to medium in size.
The range from 32 KB to 32 MB was considered small to medium, whereas LOBs were designed for data larger than 100 MB. So by recognizing this as a JSON data type, the database can take advantage of the LOB architecture. Other enhancements also include the acceleration of compressed LOBs, with append and compression caching, which improves the previously poor performance of reads and writes to compressed LOBs. It's faster than before.

12:24 Nikita: Bill, what do you think about the recent increase in the column limit? Previously, the limit was 1,000 columns per table, which sometimes posed issues when migrating from other systems that allowed more than 1,000 columns, right?

Bill: Right, and it also comes from workload requirements: machine learning and Internet of Things workloads can have hundreds of thousands of attributes, dimensional attribute columns. And even our very own blockchain tables reserve up to 40 hidden virtual columns, which takes away from the total. Virtual columns count towards the column limit, and when some applications drop columns, the columns are just converted to unused, so they still count towards the limit. There were workarounds, like column switching and table splitting, but they were most likely not the best way to do it. And big data use cases really did see files that required more than 1,000 columns.

13:42 Lois: So, now that we can have 4,096 columns in a table, I'm sure it's made handling complex data a lot easier.

Bill: Since other systems do support higher column limits, the increase can make migration from other systems easier, and possibly even a little more attractive, while making applications a little simpler, because the 1,000-column limit was not always optimal for analytics. A thousand columns might have been plenty for OLTP-type environments, but not for analytics, especially when it comes to machine learning and those Internet of Things workloads we talked about, where the previous workarounds, like splitting tables, really caused more performance issues than anything else. So we want to avoid those suboptimal workarounds. And the nice thing is there's no change to the SQL. If we previously had tables that were split and were trying to work around that in our SQL, now we don't have multiple objects to deal with, which actually helps improve that SQL.

14:57 Nikita: How do we actually go about increasing the column limit to 4,096?

Bill: You do have to have the compatibility set to 23c, because it's a new feature. There is a new initialization parameter called MAX_COLUMNS, and you set it to one of two values: STANDARD or EXTENDED. It is dynamic. When it's set to STANDARD, the limit is only 1,000; when it's set to EXTENDED, it allows 4,096. It is modifiable at the PDB level; however, a PDB inherits the root-level value if it isn't explicitly set. You can't alter it in a session. And multiple instances in a RAC environment must use the same value. Now, one thing to notice: it cannot be set back to STANDARD if you have created a table with more than 1,000 columns. Here's one thing that might get you: when you drop a table that has more than 1,000 columns and then try to set the parameter back to STANDARD, it might still tell you, hey, you have tables with more than 1,000 columns.
Don't forget your recycle bin, unless you did a DROP TABLE with PURGE.

16:09 Lois: Are there any performance considerations to keep in mind, Bill?

Bill: There's really no DML or query performance degradation for these tables. However, as you would expect, the new column limit may require more memory: additional shared pool, additional SGA for the extra columns, and more buffer cache as we bring blocks in. The increased column count also increases total PGA memory usage. Those increases are expected. But the big advantage is that it gives us the ability to eliminate some of those suboptimal workarounds we had in the past.

17:02 Nikita: Ok! We covered a lot today, so thank you Bill and Ron.

Lois: To learn more about what we discussed today, visit mylearn.oracle.com and search for the Oracle Database 23ai New Features for Administrators course. Join us next week for a discussion on some more Oracle Database 23ai new features. Until then, this is Lois Houston…

Nikita: And Nikita Abraham signing off!

17:27 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
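For readers who want to try what Ron describes, here is a minimal SQL sketch of the two steps he mentions: enabling the automatic SPM evolve task through DBMS_SPM, and monitoring the results in the DBA_SQL_PLAN_BASELINES view. The parameter name and value follow the documented DBMS_SPM.CONFIGURE interface, but verify them against your 23ai release before relying on them.

```sql
-- Enable the automatic SPM evolve task, as described in the episode.
-- Parameter name and value follow the documented DBMS_SPM.CONFIGURE
-- interface; confirm against your release's documentation.
BEGIN
  DBMS_SPM.CONFIGURE('AUTO_SPM_EVOLVE_TASK', 'AUTO');
END;
/

-- Monitor the managed baselines: ORIGIN shows where each plan came from,
-- and ACCEPTED shows whether the advisor has validated and approved it.
SELECT sql_handle,
       plan_name,
       origin,
       enabled,
       accepted,
       last_executed
FROM   dba_sql_plan_baselines
ORDER  BY last_executed DESC NULLS LAST;
```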
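Bill's wide-table walkthrough likewise maps to a short sequence of statements. This is a sketch assuming the MAX_COLUMNS parameter behaves as described in the episode (dynamic, STANDARD/EXTENDED values, COMPATIBLE at 23 or higher); the wide table named in the comments is hypothetical.

```sql
-- Raise the per-table column limit from 1,000 to 4,096. The parameter is
-- dynamic, can be set per PDB, and all RAC instances must use one value.
ALTER SYSTEM SET MAX_COLUMNS = EXTENDED;

-- ... create and later drop a hypothetical wide table ...
-- DROP TABLE wide_sensor_readings;  -- lands in the recycle bin by default

-- Reverting fails while any table with more than 1,000 columns exists,
-- including dropped tables still sitting in the recycle bin.
PURGE DBA_RECYCLEBIN;               -- or use DROP TABLE ... PURGE up front
ALTER SYSTEM SET MAX_COLUMNS = STANDARD;
```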
Join Jame and Callum on this episode about the ultimate four rings of power: tetracyclines, the Swiss army knife of antibiotics. We talk through the history, uses, mechanism, PK/PD, and resistance. Become a pharmacophil as you learn something about pharmacophores.
Show notes for this episode here.
Support the show. Questions, comments, suggestions to idiotspodcasting@gmail.com or on X/Threads @IDiots_pod.
Prep notes for completed episodes can be found here (not all episodes have prep notes).
If you are enjoying the podcast, please leave a review on your preferred podcast app!
Feel like giving back? Donations of caffeine gratefully received! https://www.buymeacoffee.com/idiotspod
The Internet of Things (IoT) has seamlessly integrated into our everyday routines, from smart home gadgets and wearable devices to industrial equipment and even urban infrastructure. As more devices are connected to the network, it becomes increasingly crucial to have strong security measures in place. In this blog post, we will look into the complexities of IoT, the approaches required to safeguard digital and physical assets, and future trends and advancements. What is IoT? The Internet of Things (IoT) is the interconnected network of physical devices embedded with software, sensors, and other technologies that enable them to connect and share data or information with other devices and systems via the Internet. These devices can be as basic as sensors or smart home appliances, or as advanced as industrial machinery and infrastructure components. The importance of IoT lies in its ability to enhance operational efficiency, improve real-time decision-making, increase automation, and facilitate data-driven insights across various industries and personal applications. View More: A Comprehensive Guide to IoT Security
Hosts Will Larry and Victoria Guido chat with Sanghmitra Bhardwaj, CEO and Founder of Insusty. Sanghmitra shares her journey from a small village in the foothills of the Himalayas to becoming a founder in France, driven by firsthand experiences with climate disasters and a passion for sustainable living. Insusty, a sustainability loyalty program, is a platform incentivizing individuals to adopt climate-positive actions through rewards, thereby fostering a community motivated towards environmental stewardship. The show digs into the mechanics and vision of Insusty, highlighting how the platform rewards eco-friendly actions like volunteering and donating, rather than purchases. This approach aims to bridge the gap between the desire for sustainable living and the practical challenges individuals face, such as the perceived high costs of sustainable products. Sanghmitra reveals the evolution of Insusty, including strategic pivots towards niche markets within the circular economy and the importance of transparency and impact measurement in building trust with consumers. Towards the episode's conclusion, the conversation shifts to broader implications of sustainability in technology and business. Sanghmitra expresses curiosity about future expansions of Insusty, particularly in tracking and rewarding individual daily eco-actions more effectively. She also touches upon the challenges and triumphs of being a solo female founder in the tech and sustainability sectors, underscoring the significance of community, perseverance, and innovation in driving change. Insusty (https://www.insusty.info/) Follow Insusty on LinkedIn (https://www.linkedin.com/company/insusty/), Instagram (https://www.instagram.com/theinsusty/), or X (https://twitter.com/the_insusty). Follow Sanghmitra Bhardwaj on LinkedIn (https://www.linkedin.com/in/sanghmitra-bhardwaj-515428236/) or X (https://twitter.com/sustainwithsan). Follow thoughtbot on X (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: WILL: This is the Giant Robots Smashing Into Other Giant Robots podcast, where we explore the design, development, and business of great products. I'm your host, Will Larry. VICTORIA: And I'm your other host, Victoria Guido. With me today is Sanghmitra Bhardwaj, CEO and Founder of Insusty, a sustainability loyalty program for individuals. Sanghmitra, thank you for joining us. SANGHMITRA: Thank you so much for having me here. I'm super excited for the podcast and to discuss various topics that we are about to. And I'm sure that it's going to be a learning experience, not just for the audience, but also for me. So, thank you for this opportunity. VICTORIA: Why don't we just start off getting to know you a little bit? Tell us something exciting going on in your life, maybe outside of work. SANGHMITRA: Okay, so, well, recently, I joined a pole dancing class. I wanted to challenge myself and see if I have the core strength that I need to be strong. And I also feel that it's something that I always wanted to do to come out of my comfort zone. So, it's been fun so far. VICTORIA: I tried that, and I thought that I would naturally be good at it because I'm a rock climber. And so, I thought I'd have all the right muscle groups, but the coordination and [laughs], like, expression of it is still challenging if you've never done it before. SANGHMITRA: Yeah, definitely. 
And I think there are some techniques and if you don't do it right, like, you will not get it at all, those poses and, like, how you climb the pole and everything. So, I completely relate to your experience here. VICTORIA: I want to do more dance, actually, because the mind-body connection and getting into that feeling of flow is really interesting for me. And I think it's like expressing through your body, which 80% of communication is non-verbal, which is really interesting. SANGHMITRA: Yeah, that's true. Just to add to it, I wanted to also share with you that I used to do modeling back in India, and I really love expressing myself with my body. And it's been super interesting to see that. And also, when I have conversations with other people, these are the things that I observe a lot. Is it the same for you? Do you also observe other people's body language when they are talking to you and probably change some topics that you are trying to discuss? VICTORIA: Yeah, absolutely. You can tell if people are listening to what you're saying. They, like, lean in a little bit, or if they're not really wanting to relate to what you're saying, they're, like, crossing their arms in front of you. So, as someone who works in business development, I definitely pay a lot of attention [laughs] to all that stuff. But I'm curious, how did you go from being a model in India to founder and CEO where you are today? SANGHMITRA: That's something that I would love to talk about, and also, it has to do from where I come from. So, I come from a very small village in the foothills of the Himalayas. There, I witnessed climate disasters firsthand. In 2013, there were a lot of cloudbursts happening in those areas. An entire village next to my village disappeared completely without a trace. And those were some moments in my life where I really felt like we live in a world where you can be far from Europe...for example, currently, I live in France, and here, when heat wave happens, we all suffer and people talk about it. But I have seen, like, the adverse effect of what it can lead to. So, there was a part of me that always wanted to do something in terms of the impact that I create, like, with my work. So, I started doing modeling, which was something for myself as well to gain some confidence. At the same time, I worked with sustainable brands in India. I modeled for them, and then I discovered their work. I got inspired by it, and I realized that it's something that interests me a lot, and I wanted to pursue my studies in it to know more about it. So, that's when I came to France to pursue my master's in sustainable finance to discover more about this field and to see where I belong. And finally, I founded Insusty, where I could see that I could bring my inspiration from the sustainable brands that I worked with. Whether it's from the fashion or, the food industry, or the travel industry, I could see the inspiration coming from there. At the same time, I could see how we need to create mass adoption through incentivizing climate action, which was something that I explored during my studies. And I kind of went with Insusty, and that was the beginning of my founder journey. WILL: I have a question about the way you grew up, and you're saying in a village. Can you expound a little bit on that? Because you said, climate change wiped out an entire village. And so, when I saw that in the email, I was like, I don't think I've ever had a chance to actually talk to someone that lived in a village. 
I grew up in the United States. So, like, help paint that picture. When you say you grew up in a village, what do you mean by that? What was it like growing up in a village, and also, what do you mean by the next village got entirely wiped out? SANGHMITRA: Yeah. Living in a village it's like being a part of a tiny, well-knitted community, and it's, like, everyone knows everyone. And sometimes it's good, sometimes it's bad because when people gossip, of course, it spreads like a wildfire. As well as when you need support and when you need help, this community is always there, too. So, the part of belonging to such a community and to kind of engage with people is something that I really enjoy about coming from a small village. And that's something that I oftentimes search in France, where I can be a part of such communities as well, where people inspire each other. For example, currently, I'm a part of a wonderful community of women of color founders living in Europe. It's called Founderland. And it's thanks to Founderland that I found you then I could join this podcast. So, when it comes to the small village, this is what I really really love about it is the small knitted community we have. When I say that the entire village next to my village disappeared, I mean that when there was the cloudburst in the mountain, the soil and everything drowned the entire village. So, there was a school, and we used to hear a story about the school, where the kids were told by the teachers to run because there is a cloudburst, and "We are about to die if we stay in this place." And as a student, as a kid, what do you think first? You think about packing your bags instead of running. So, the kids ended up packing their bags before they could run, and by that time, it was too late. So, this is just one of the heartbreaking stories that I'm sharing with you right now, but it had been something that really left a mark in my life. VICTORIA: I really appreciate you sharing that story because when I talk to people about climate change, I think it's really easy to get this nihilistic attitude about, well, climate change is going to kill us all in 20 years. So, why bother doing anything about it? And what I usually answer back is that climate change is already killing people. And then, it's happening in your own neighborhood, even, like, you know, I live here in San Diego, and it's always between, like, 60 and 80 degrees every single day [chuckles], but our beaches are collapsing. There are neighborhoods that are more impacted by pollution than others and are experiencing environmental impacts from that and their health, and everything like that. So, I'm curious how it all comes together with what you're doing with Insusty and how you're inspiring people to take action towards sustainability in the here and now. SANGHMITRA: Actually, I have a question for you and Will. I wanted to understand, for example, if you purchase something in terms of, for example, it's related to fashion, or it's related to food products, what is the criteria that's most important to you? And maybe probably you can tell me, like, the top three criteria that are most important for you when you buy something. And then, I would love to share how Insusty can help you buy better. VICTORIA: When I'm looking to buy things, I look for, like, price. I want it to be reasonable, but I also don't want it to be so cheap that it means it's a really poor quality. So, I want to find that balance between, like, quality and price. 
And I do also care about sustainability, and, like, what is the background of the company that I'm buying it from? You know, what's their reputation? What's their, like, practices? Like one example is, like, the rugs for your house. So, I like to buy rugs that are made from sustainable fabrics and dyes and that I can wash them because I have a dog. And so, that's kind of, like, what I think through when I buy things. But it's not always easy, especially with clothing, because it seems like anyone who makes clothing, there's just always this risk of it being sourced at some part in the manufacturing pipeline having to do with either child labor or really terrible sustainability practices. WILL: Yeah. I would say, for me, early on, especially when I was growing up, we didn't have a lot of money, so it was just whatever is the cheapest, whatever we could afford at that moment. It wasn't really looking into the quality, or sustainability, or any of those items. Some of the stuff I look back on that I ate often, I'm like, whoa, man, that was not the best thing. But it was the cheapest, and it was what we ate and things like that. So, now that I'm older, my wife has been talking to me about some of that stuff, and it's like, oh, I had no idea, because of the environment I grew up in, that, like, that's even affecting me. And that was kind of why I asked you about the village thing is because I feel like we can get in a bubble sometimes and not even be aware of what's happening to other people. And I think, Victoria, you said something about people not understanding climate change. It's kind of tough at times to talk about climate change when you live in...where I'm at in Florida, it's like, okay, it gets hot, and then it gets cold. And yeah, we have a hurricane every now and then, but whenever you told the story about the village, it's like, oh, wow, like, that's a different game. That's a different level. I didn't even know about that. So, I think that's kind of my journey now is I am starting to understand sustainability. I think a lot of times I still have that I grew up with nothing mindset and want to get the cheapest thing because sometimes buying sustainability is super expensive. So, that's why I'm glad that I'm talking to you, so maybe I can learn some of those things. So yeah, that's kind of been my journey with it. SANGHMITRA: That's really wonderful to get your insights because now I can tell you confidently what we do. Basically, when I talk to people, it was generally the same thing that I asked them, "What's the most important thing when you buy, like, the top three most important things?" Sustainability was definitely one of them, but cost was always there. Regardless of the background that they are from, cost was something that they all thought about. So, what we do at Insusty is that we incentivize individuals to do something good for the planet. It can be, for example, you want to volunteer at an NGO next to your place. You want to get rewarded. So, what we do is we offer you loyalty points that help you to buy from sustainable brands. So, you try these products because, oftentimes, as Will also mentioned, there is a perception, and it's also a reality, that sustainable products tend to be more expensive. So, we try to deal with that by offering a loyalty program that incentivizes climate action. And in terms of the sustainable brands, they get new customer base. They get to interact with these customers. They get to see their product and sites. 
What is something that the customers really like? What is something that can be improved? How can they improve in terms of their own sustainability and their impact? For example, their supply chain operations and so on. So, it's something that we provide them and help them also with insights as well as new customer base. We try to support them with that. At the same time, on an individual level, we help with the cost factor, which is one of the most important things. When we want people to change, when we want people to adopt sustainable lifestyle, we kind of need to incentivize that so that mass adoption can be possible. VICTORIA: So, I'm imagining, like, I want to know a new brand that I want to buy clothes from, like essential clothes. I could go into the app and, like, find companies that produce the thing that I want, and then I could get points and rewards for buying consistently from that brand. SANGHMITRA: So, we are not like an actual loyalty program. So, you only receive points when you do something good for the planet. You don't receive points when you purchase from brands. This is a loyalty program where we give you points when you do something good for the planet, for example, donations. For NGOs, we have volunteer programs that individuals can participate in and receive loyalty points. But in the future, we are ambitious, and we want to go far. And we think that each and every activity of an individual can be tracked in terms of sustainability, how they are segregating their waste at home, how they're managing that, and so on, and give them points for each of their eco actions. VICTORIA: Awesome. Yeah. Okay. I love that. Yeah. So, what kind of things would earn me points, like, in my home ownership here? SANGHMITRA: If you volunteer with an NGO nearby or if you would like to participate in an event, for example, if you want to donate clothes, all these eco actions can give you loyalty points for the moment. And in the future, we want to also track the actions that you do at home. You save electricity, for example. You want to walk to the office instead of taking a cab, and all these activities, so that we can kind of make the experience also for the user a bit more like a game so that they enjoy doing it at the same time they receive rewards. And they can make purchases as well with the sustainable brands on our platform. VICTORIA: I like that because I've been talking with my partner about how do we live more sustainably, or how do we, like, reduce our consumption or give back. And I think if it was gamified and we got points for it, it's more motivating because then you also see that other people are doing it as well. And so, you're part of a community that's all trying to take the same action. And that will have a bigger impact than just one individual, right? SANGHMITRA: Yes, definitely. And we do have that feature on our platform where you could see near your area who donated and who is working in a particular NGO, so based on the fact that if the individual is comfortable in sharing that. Most of the time, when someone does something good for the planet, they would love to show it to the rest of the world. So, we have seen that people love to share their experiences and their badges, saying that, okay, they donated, for example, five euros to this NGO, and so on. So, they really love that. And it feels also really good to see this community and to get inspired by it. 
Mid-Roll Ad: When starting a new project, we understand that you want to make the right choices in technology, features, and investment but that you don't have all year to do extended research. In just a few weeks, thoughtbot's Discovery Sprints deliver a user-centered product journey, a clickable prototype or Proof of Concept, and key market insights from focused user research. We'll help you to identify the primary user flow, decide which framework should be used to bring it to life, and set a firm estimate on future development efforts. Maximize impact and minimize risk with a validated roadmap for your new product. Get started at: tbot.io/sprint. WILL: I think it's going to take all of us doing something to help with climate change and to make a difference. So, I like how you're incentivizing. You're making a difference. You say you get reward points. So, once I do an item or an action and I get reward points, what does that look like on the backend of it? SANGHMITRA: For the individuals they have a dashboard to track their actions. They have a dashboard to also track what they are purchasing. So, if they're purchasing food or they're purchasing more items related to fashion, they can also check that. They can check the total number of points that they have received so far, where they have used it, and so on. And at the backend, for us, we see it as the total number of transactions that are taking place, so, for example, how the loyalty point is being used. So, we have APIs that are in place between our platform and the platforms of other sustainable brands in our network. So, in our backend, we can see the transactions; for example, an individual used 100 points to get 10% off from one of the sustainable brands on our platform. And in terms of the sustainable brand side, even they have their own dashboard. They can also track how many individuals are using their points on their platform, and so on. So, they also have access to their own analytics dashboard. And through the same application, they can also provide us the payments through subscription and transaction fees. VICTORIA: Yeah, that's really interesting. And so, I understand that you've been in the journey for a little while now. And I'm curious: if you go back to when you first got started, what was surprising to you in the discovery phase and maybe caused you to pivot and change strategy? SANGHMITRA: So, one thing that I pivoted with was the type of brands that we wanted to onboard. Before, we had a very open approach; for example, we want brands that are sustainable, or if they are upcycling, or if they have, like, a particular social impact attached to it or an environmental impact attached to it. So, we were focusing on having the horizons a bit like the aspects of choosing a sustainable brand to be a partner. It was a bit broader for us. But when we talked with the people, they wanted a niche. For example, they wanted upcycle products. They wanted more brands in the circular economy domain. And that's when we realized that we need to have a niche. So, we focus on the brands that are more linked towards circular economy that are promoting the values of recycling, upcycling, and reusing the products. So, that was when we pivoted with the idea that we should not be open to all sustainable brands. However, we need to be really accurate with our approach. We need to focus on a particular niche. 
At the same time, we need to also make sure that we measure their impact and report it to our customers to ensure transparency on our platform. So, that became a priority more than having more and more brands on our platform. WILL: Yeah, I really...that was actually one of my questions I was going to ask you because I like how you are vetting them because I've, especially here in the States, I've seen, like, companies, like, slap 'non-GMO' or 'gluten-free.' And it's like, well, it doesn't even have wheat in it, so, like, yeah, it's gluten-free. So, it's like, it's more of a marketing thing than actually, like, helping out. So, I'm glad you're vetting that. How has that process going for you? SANGHMITRA: It's actually going really well, and we have established a five-step onboarding process. And in the first two steps, we also focus on measuring their impact. We have a self-evaluation form. We also check if they have some existing certificates. We also make sure that we have enough data about their supply chain and how they are working. And these are some of the information that we also share with our consumers, the one who would be interested to buy products from these brands, to make sure that we are transparent in our approach. There's also one more thing that we do. It's the quarterly reporting. So, every three months, we also report the individuals who are buying from sustainable brands on our platform that, okay, this brand did better this quarter because they implemented a process that, for example, is reducing a certain amount of emissions from their supply chain, or any other departments. So, these are some of the information that we also share with the individuals. VICTORIA: And what does success look like now versus six months from now or five years from now? SANGHMITRA: For the moment, success would look like for me to have more connections, more people who support our project and our initiative, and the more people joining us. In terms of the next six months, I think it would be linked to fundraising. But I wouldn't go so far at the moment because, for me, I take one day at a time. And this is something that has been super helpful for me to streamline my tasks. So, I take one day at a time, and it's working really well for me. WILL: What are some of your upcoming hurdles that you see? SANGHMITRA: When I talk about hurdles, I often see it in two parts, one being the internal hurdles and the other one being external. So, in terms of the internal hurdles, it can be something like I'm putting myself in a box that, okay, I'm a single woman founder. How can I do something good? And just doubting myself and things like that. These are some of the internal hurdles that I'm working on every day [chuckles]. I'm also talking to executive coaches to get their advice on how I can improve myself as well to overcome these internal hurdles. However, in terms of the external hurdles, these are some things that are not in my control, but I try my best to make the most of it. Currently, in terms of the external hurdles, I would say that I live in a country where I used to not even speak the language. So, initially, the hurdle that I experienced was mostly the cultural hurdle. But now it's more related to the fact that I am a single female founder, and there are perceptions around it that you need to have a co-founder. And there are a lot of different noises everywhere that doesn't allow you to grow. VICTORIA: And you're not just a founder, but you're also an author. 
And I wanted to ask you a little bit about your book, the Sustainability Pendulum. Can you share a little bit about what it is and why you wrote it? SANGHMITRA: So, Sustainability Pendulum is the book that I wrote last year, and I always wanted to write it. And last year, I put myself to work, and I was like, at least every day, I'm going to try and write one page, and probably by the end of the year, I can finish the book [chuckles], and that's what I did. I had to be super consistent. But I came up with Sustainability Pendulum, and it's about the stories from the past and the sustainable approaches that we had in the past, how we used to...in different religions, we have some stories written in the scriptures related to sustainable practices. And oftentimes, when we talk about sustainability today, we talk about the future. We talk about implementing different technologies and, doing a lot of innovations, and so on. However, we don't look into the past and see how efficiently things were handled when it came to sustainability in the past. And these are some of the stories from the past, from different religions, and how it transcends to today's sustainability issues and solutions. So, that's what the book is about. And why it's called the pendulum, it's because how the pendulum moves. I think it's obvious [laughs], so the pendulum's to and fro motion. It goes to the past, and it goes to the future. So, that was the whole concept behind the sustainability pendulum. WILL: That's amazing that you wrote a book, much, much respect on that. I am not an author, so...And I also know because my wife she's been talking about writing a book and the different challenges with that. So, kudos on writing a book. Would you write another one? SANGHMITRA: Actually, I would love to. I'm just looking for something that equally inspires me how it did for the last one. But I think once you come out of that space and you're consistent with writing the book or consistently working to achieve something, I think eventually it comes to you. So, I don't know what are the challenges that your wife mentions that she faced in writing the book. WILL: Like, having enough to write about, like you said, just sitting down each day writing a book. And I think publishing a book is tough. I know we've come a long ways, like, you can self-publish now instead of going through publishing companies, and just those different avenues of how many steps it takes. It's not just writing a book, sitting down and writing a book, and sharing with everyone. It's multiple steps that you have to go through. SANGHMITRA: Definitely. I couldn't agree more with you on this one. Just to add to it, how I managed to do this was also because I structured the book earlier. And in order to also publish it, I realized that I don't want to wait. And I self-published the book as soon as I found out that, okay, this is perfect, and it's ready. I need to just move forward with it. What helped me as well was the way I structured the book earlier. And then, I was like, okay, every day, this is what I'm going to work on. And it kind of helped me to get to the end of it. WILL: That's awesome. I like how you had forethought and how it made it easier for you to come up with ideas and write it. So, that's awesome. SANGHMITRA: I wish the best to your wife as well for her book. And I hope that once it's ready, you will let me know about it. WILL: Yes, I definitely will. You're talking about being a woman founder who is single. I don't want to assume. 
So, why is it tough for you to be a woman founder who's single? SANGHMITRA: When I say single female founder, it means that I don't have a co-founder. It's not, like, my relationship status but just [laughs] the fact that... WILL: Yes. Yes. [laughter] SANGHMITRA: Just that I am a single founder, like, then I don't have a co-founder, which oftentimes poses as a risk, especially when you talk to an investor. This is what I feel based on my experience. But I think the times are changing, and I feel that the more the project is growing, the better it is getting in terms of the people who are interested as well to be a part of Insusty as an investor or as a partner. Things have become better now than they were a few years ago. So, I can see the change. But, initially, I did used to feel low about it that, okay, I'm a single female founder, and oftentimes, it was considered as a challenge. But if you take my perspective, I think, for me personally, it possibly was also one of my biggest strengths because I could be that one person going to the meetings, and I felt that people were more open to share things. They did not feel threatened by me. And that was something that really helped me to also form connections with people. VICTORIA: I love how you connect having a small community in your village where you grew up to creating a community around yourself as a founder and having a village that supports you, and you feel comfortable around the community as well, and as part of that community. If you could go back in time and give yourself some advice when you were first getting started with Insusty, what advice would you give yourself? SANGHMITRA: Slow is good. When I say that, I mean that every time we talk about different startups and different companies, and it's always about how rapidly the startup is growing, how exponentially they are growing, and so on. But I feel that in terms of when you really want to create an impact, and you are in the green tech space as well, being slow and getting somewhere is better than going fast and then having a burnout. So, one of the things that I would tell myself when I just started would be slow is good. WILL: Even with coding and a lot of things in life, I feel like that's really good advice: slow is good. Slow down––enjoy the moment. So, I like that advice. VICTORIA: I was going to say, it sounds like a more sustainable pace for yourself also [laughs]. SANGHMITRA: Exactly. VICTORIA: Sustainability in the environment, and also in our own energy, and emotions, and motivation to get things done. So, I love that. WILL: I see what you did there [laughter]. VICTORIA: Yeah, [inaudible 30:40] all back. Do you have anything else that you'd like to promote? SANGHMITRA: I would really love to also tell people that I'm very open to communication. So, if anyone would like to reach out to me on LinkedIn, it would be really awesome, and we can get on a call as well. I have my Calendly link right on my profile, and I'm very open to communication. So, if there is someone who would like to talk to me about any of the things that interest them or probably something that they could advise me or I could learn from them, I'm more than open to do so. VICTORIA: Love that. And then, do you have any questions for me or Will? SANGHMITRA: So, in terms of the development part, I do have some questions, like, in the technical side. 
So, when it comes to the fact that we have to kind of calculate the eco actions of individuals in the future, we want to also see if we can calculate the daily actions that they do, for example, walking instead of taking a cab, or segregating their waste, et cetera. I wanted to know, in the future, I want to implement these features, but can we actually get a perfect product around it? Is that possible where we can track everything? WILL: Yeah. So, when you say track everything, like, I know you talked about walking and some of the different actions. Can you expound on that? SANGHMITRA: For example, instead of IoTs...because I know that some hotels they do use IoT devices to track the water consumption, and so on. However, on an individual level, how can we just track it through the smartphone or through the app that they have? Because, okay, walking can be tracked. This is actually one of the challenges I'm facing, so I want to just be open about it, and I'm very open to ideas also. If you have some ideas that I could experiment around, I would really love to. In terms of the activities like walking, waste disposal, and so on, do you think that there are some kind of features that we could implement to track these actions? One of the things that I was thinking about was we let people take a photo of how they are segregating the waste in the end, and through that, we can tell them, "Okay, this is great," and we give them the points. But how can we do it and also automate it at the same time? VICTORIA: So, one approach that I know when people work at thoughtbot on these types of issues and trying to figure out, like, what is the right feature? How are we going to implement this? Going through a product design sprint where you spend a week with a product designer and someone who can, you know, really quickly create MVPs. And you go through this process of figuring out what's the most important feature. And you're talking to users, and you're trying to...you're going through that discovery process in a short period. And we actually have a video series where we walk through every step of that process. But, like, for me personally, things that I can think of in my life that I would want to track one thing I've been trying to do more is actually electronic recycling, which in the U.S. my neighborhood is different. It's only open on, like, Thursdays and Saturdays. And I have to, like, really remember to go out there and, like, put my electronics out there. And I don't think it's very, like, well-known. So, I think that would be something interesting to, like, promote as possible. And we also have the green bins now, which are new, which allow you in California to, like, have composting. So, you have now your regular trash, your recycling, and your compost bins. So, actually, like, trying to use those and track them. Otherwise, one of the things I think about is, like, reducing the amount of plastic consumption, which includes things like, you know, when you buy toilet paper, it comes wrapped in plastic. How can I incentivize myself and my partner and even my family to, like, switch away from those types of products and get more into, you know, using towels instead of paper towels or finding alternative methods for getting those products while reducing the amount of plastic that comes with it? SANGHMITRA: That's super interesting. I'm really, really glad to have your insights as well. I do have a question for you. Have you worked with startups in the field of impact? 
And if so, what have been some of the ideas that you really loved to implement? VICTORIA: Yeah, actually, we had another guest on the Giant Robots podcast who I think you're connected with as well who created essentially, like, a GoFundMe but for environmental projects and in areas that, you know, a $5,000 grant to help do a beach cleanup could have a really big impact. Like funding programs and a marketplace for those types of green projects in areas that are the most impacted by climate change and have the fewest resources to actually do anything about it. So, I thought that was really exciting in trying to figure out how can we use tech to solve problems for real people, and for people that don't typically get the focus or the majority of the funding, or the majority of time spent in those communities. So, that, I think, is what is really exciting: to see people come from those communities and then figure out how to build solutions to serve them. SANGHMITRA: That's really wonderful. Is there, like, a specific market where you have seen growth of such startups and companies more? The companies especially you have worked with in the past there in the field of impact, are they mostly from the U.S., or which are the markets they are from essentially? VICTORIA: Yeah. So, I mean, I'm from the U.S., so that's where I see the most. I'm in San Diego. So, when I go to, like, startup weeks and things like that, that's where I'm getting the majority of my exposure. I do also know that there is a Bloomberg Center focusing on excellence and data in the governments. And that's not just U.S.-based but going more global as well, so trying to teach civic leaders how they can use the data about whether it's sustainability or other issues that they're facing too, like, figure out how to prioritize their funding and in what projects they're going to invest in from there. So, I think that's really interesting. I don't know, I don't know what the answer is, but I know that there are some countries that are hoping to make the investments in sustainability and ecotourism, as opposed to allowing industry to come in and do whatever they want [laughs]. So, I don't know if that answers your question or not. SANGHMITRA: Yeah, I think it completely answers my question. Thank you for sharing that and also a bit more. WILL: There's so many things that I've learned through the podcast. So, I'm excited to see the impact it has. And I think you're doing an amazing job. VICTORIA: Thank you so much for coming on and being with us here today and sharing your story. SANGHMITRA: Thank you. WILL: You can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobots.fm. And you can find me on Twitter @will23larry. VICTORIA: And you can find me on Mastodon @thoughtbot.social@vguido. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening. See you next time. AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.
This episode features a collaboration with Jame and Callum from the ID:IOTS podcast. They are two infectious disease physicians who, just like Let's Talk Micro, aim to share information and help educate the audience. We join forces to talk about the laboratory. We start by going over what a Medical Laboratory Scientist does (I talk about what I do). We also go over questions like: What do we expect to see on a laboratory requisition? Do we add media based on what the source or the requisition indicates? How can we improve communication between the laboratory and providers? Tune in to find out.
ID:IOTS podcast page: https://idiotspodcasting.buzzsprout.com
Please download and review episodes!
Time to "idiot proof" the myth of bactericidal agents being better than bacteriostatic agents on this collaborative episode with the ID:IOTS podcast.
About our guests: Jame and Callum are the hosts of ID:IOTS, an infectious disease podcast. You can find them through https://idiotspodcasting.buzzsprout.com/share, wherever you get your podcasts, and also on Twitter as @IDiots_pod.
References from this episode:
https://idiotspodcasting.buzzsprout.com/1782416/12537247-44-the-basics-of-beta-lactamase-inhibitors
Wald-Dickler N, Holtom P, Spellberg B. Busting the Myth of "Static vs Cidal": A Systematic Review. Clin Infect Dis. 2018;66(9):1470–4.
Use of bacteriostatic agents in neutropenic fever: DOI: 10.1586/eri.09.11
Jaksic B, Martinelli G, Oteyza JP, Hartman CS, Leonard LB, Tack KJ. Efficacy and Safety of Linezolid Compared with Vancomycin in a Randomized, Double-Blind Study of Febrile Neutropenic Patients with Cancer. Clin Infect Dis. 2006;42(5):597–607.
https://www.bradspellberg.com/shorter-is-better
Visit the Microbe Mail website to sign up for updates.
E-mail: mail.microbe@gmail.com
YouTube: Microbe Mail
Instagram: Microbe_Mail
2 years. It's been a hell of a ride. Here's to 2 more.
Links from this episode:
KASIC 1-page evidence summary on aminopenicillins for resistant enterococcal UTIs: https://bit.ly/44CtNnq
AUDIENCE SURVEY!!! Please do fill this out, it would mean a lot to us: https://forms.gle/aCq2djmfKPjA6tkMA
Support the show
Questions, comments, suggestions to idiotspodcasting@gmail.com or on X/Threads @IDiots_pod
Prep notes for completed episodes can be found here (not all episodes have prep notes).
If you are enjoying the podcast please leave a review on your preferred podcast app!
Feel like giving back? Donations of caffeine gratefully received! https://www.buymeacoffee.com/idiotspod
It's time for journal club! Jame and Callum discuss community-acquired pneumonia treatment and a paper from 2021 which examines shorter-duration therapy for patients responding to therapy.
Paper discussed: Dinh, Aurélien et al. "Discontinuing β-lactam treatment after 3 days for patients with community-acquired pneumonia in non-critical care wards (PTC): a double-blind, randomised, placebo-controlled, non-inferiority trial." Lancet 2021;397:1195–1203. doi:10.1016/S0140-6736(21)00313-5
Shorter is Better site: https://www.bradspellberg.com/shorter-is-better
Support the show
Questions, comments, suggestions to idiotspodcasting@gmail.com or Tweet us @IDiots_pod
Prep notes for completed episodes can be found here: https://1drv.ms/u/s!AsaWoPQ9qJLShugmB2EOm8FMePNBtA?e=IKApb5
If you are enjoying the podcast please leave a review on your preferred podcast app!
Feel like giving back? Donations of caffeine gratefully received! https://www.buymeacoffee.com/idiotspod
This episode features a conversation with Callum, an Infectious Disease physician and one of the co-hosts of ID:IOTS, an infectious disease podcast. What is their podcast about? What kind of work does an Infectious Disease doctor do? How does their work relate to the laboratory? Tune in for a great talk about the laboratory, challenges we have, and more.
Link to the ID:IOTS podcast: https://idiotspodcasting.buzzsprout.com
Callum and Jame talk about everyone's favourite antibiotic class that covers Staph, Pseudomonas, and Gram negatives, is predominantly renally excreted, and isn't a quinolone: the aminoglycosides. We go over classification, mechanism of action, spectrum, resistance mechanisms, PK/PD, penetrance to various tissues, toxicity, and dosing. This is a companion piece to our star appearance on Febrile in August: https://player.captivate.fm/episode/0777091c-0991-42d4-89a9-4032d77b4fa5
Questions, comments, suggestions to idiotspodcasting@gmail.com or Tweet us @IDiots_pod
Prep notes for completed episodes can be found here: https://1drv.ms/u/s!AsaWoPQ9qJLShugmB2EOm8FMePNBtA?e=IKApb5
If you are enjoying the podcast please leave a review on your preferred podcast app!
Feel like giving back? Donations of caffeine gratefully received! https://www.buymeacoffee.com/idiotspod
In this episode of The New CISO, Steve is joined by Steve Magowan, Vice President of Cyber Security at BlackBerry, to discuss how military teachings apply to tech. Having started his career in the air force, Steve understands how the military mindset can make you an asset in the security field. Evaluating the benefits of his experience, Steve shares what CISOs can learn from military professionals. Listen to the episode to learn more about the importance of understanding IoT, the military work ethic, and how quality leadership stems from a lack of ego. Listen to Steve and Steve discuss the key qualities of a leader and breaking into cyber security:
Meet Steve (1:39) Host Steve Moore introduces our guest today, Steve Magowan. Steve reveals how long he's worked for BlackBerry. Steve Magowan explains how his background in the air force led to his cyber security career, where he utilizes his tech abilities and wears many hats.
A Canadian In The Air Force (4:44) Steve asks Steve Magowan, a Canadian, what was more challenging about the air force: the cold in Canada or dealing with Americans? Steve shares that the real difficulty is flying through the congestion above the United States. He realized how empty most of Canada is, which makes for great training grounds.
A Transition Opportunity (8:19) Steve Magowan shares how his various skill sets suited him well for transitioning into cyber security, and how there is a growing need for people who understand IoT applications. Although having this skill set is now recognized as vitally important, it's challenging to find someone with tech abilities who can also manage a team. Due to their work ethic and unique perspective, military veterans have become a worthwhile pool for recruiting cybersecurity professionals.
The Military Mindset (13:56) Steve and Steve discuss the differences between non-military and military security professionals. Host Steve notes that people who have served tend to be more willing to work long hours and share their perspectives to manage a crisis. Steve Magowan explains that much of this team mentality comes from the "us and we and ours" core of their military training.
Moving Into Cyber Security (17:00) Although Steve did not have a direct cyber security background, a family friend knew of a job for him in the field. With years of consulting and IoT experience at 38 years old, Steve was well suited to transition, at first, into an IT team, thanks to his leadership skills. He recognizes that his military experience opened the door for him, but his hunger for knowledge made him succeed.
Bringing Leadership To The Table (22:38) For aspiring CISOs, host Steve presses Steve on which qualities helped him break into the field and assure employers of his leadership abilities. Steve reiterates that his military background made him a worthwhile candidate, partly due to his lack of ego. Steve knows he's not the most intelligent guy in the room, which makes him want to learn and figure out how to solve any security problems that come his way.
The Emerging Problem (27:55) Supply chain risks are a growing threat and a challenge to people in the cyber security world. Steve Magowan shares how security professionals have dealt with these types of breaches and the differing objectives between business leaders and CISOs.
Differing Agendas (31:15) Steve and Steve discuss the conflicting agendas between CIOs and CISOs. Corporate America has not fully grasped the increasing cyber threats, making it harder for CISOs to do their jobs.
CISOs have accepted high-risk positions, which is why they must learn how to communicate with CFOs with the CFO's interests in mind: money and business outcomes.
See You Next Time (41:16) With so much to discuss, especially third-party and supply chain risks, host Steve invites Steve back to the show.
The New CISO (42:25) To Steve Magowan, a CISO is someone who is an enabler versus a barrier. A...
Gentamicin over a beta-lactam?! Listen in as Jame & Callum from the ID:IOTS podcast (https://idiotspodcasting.buzzsprout.com/) extoll the virtues of aminoglycosides in empiric therapy…and teach Sara some Scottish slang.
Episodes (https://febrilepodcast.com/episodes/) | Consult Notes (https://febrilepodcast.com/consult-notes/) | Subscribe (https://febrilepodcast.captivate.fm/listen) | Twitter (https://twitter.com/febrilepodcast) | Merch (https://febrile.bigcartel.com/) | febrilepodcast@gmail.com
Host James Benham is joined by Chad Hollingsworth from Insight Risk Technologies. James & Chad discuss risk, data, IoT & its place in insurance.
Find us on social media! We're on Twitter, Facebook, Instagram, and LinkedIn, or follow James on Twitter!
Subscribe, rate, and comment. As always - Enjoy the Ride & Geek Out!
In this episode about Chamberlain MyQ, Jeff and Mike talk IoTs, letting people into your garage, and Amazon's continued dominance.
ID:IOTS is 1 year old today! Jame and Callum have a short chat about the podcast, why we started, and where we're going. Of course, the real ID:IOTS are the friends we made along the way…
If you want to send us cake, email idiotspodcasting@gmail.com
Callum and Jame provide an overview of the Carbapenem class of antibiotics, discussing: what they are; their spectrum of action (looking in particular at what they don't cover); resistance; how to use them; and side effects.
This site has a useful summary of the new formulations of carbapenems and their spectra: http://www.microbiologynutsandbolts.co.uk/the-bug-blog/new-sweets-on-the-market-but-will-these-new-flavours-live-up-to-their-vibrant-colours
Here is the NEJM paper on Tebipenem Jame mentions: https://www.nejm.org/doi/full/10.1056/NEJMoa2105462
Send suggestions to idiotspodcasting@gmail.com
In today's live stream, @lesolarenco talked about the application of IoT in #HVACR systems, the breadth of these applications, common uses, and how this technology also relates to other activities such as people management, facilities, inventory, etc. @prohoundcontroles #IoT #technology #management #connectivity
Hurry and watch, leave your comment, like, and share. New episodes weekly!
Follow us on social media:
Instagram - https://www.instagram.com/canaloclima/
Facebook - https://www.facebook.com/canaloclima
Twitter - https://twitter.com/canaloclima
LinkedIn - https://www.linkedin.com/in/canal-o-clima
Anchor: https://anchor.fm/canal-o-clima
Podcast available on: Deezer | Spotify | iTunes | Anchor.fm | Amazon Music Player | Google Podcast | CastBox
--- Send in a voice message: https://anchor.fm/canal-o-clima/message
About Venkat
Venkat Venkataramani is CEO and co-founder of Rockset. In his role, Venkat helps organizations build, grow and compete with data by making real-time analytics accessible to developers and data teams everywhere. Prior to founding Rockset in 2016, he was an Engineering Director for the Facebook infrastructure team that managed online data services for 1.5 billion users. These systems scaled 1000x during Venkat's eight years at Facebook, serving five billion queries per second at single-digit millisecond latency and five 9's of reliability. Venkat and his team also created and contributed to many noted data technologies and open-source projects, including Facebook's TAO distributed data store, RocksDB, Memcached, MySQL, MongoRocks, and others. Prior to Facebook, Venkat worked on tools to make the Oracle database easier to manage. He has a master's in computer science from the University of Wisconsin-Madison, and a bachelor's in computer science from the National Institute of Technology, Tiruchirappalli.
Links Referenced:
Company website: https://rockset.com
Company blog: https://rockset.com/blog
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This episode is sponsored by our friends at Revelo. Revelo is the Spanish word of the day, and it's spelled R-E-V-E-L-O. It means "I reveal." Now, have you tried to hire an engineer lately? I assure you it is significantly harder than it sounds. One of the things that Revelo has recognized is something I've been talking about for a while, specifically that while talent is evenly distributed, opportunity is absolutely not. They're exposing a new talent pool to, basically, those of us without a presence in Latin America via their platform. It's the largest tech talent marketplace in Latin America with over a million engineers in their network, which includes—but isn't limited to—talent in Mexico, Costa Rica, Brazil, and Argentina. Now, not only do they wind up screening all of their talent on English ability, as well as, you know, their engineering skills, but they go significantly beyond that. Some of the folks on their platform are hands down the most talented engineers that I've ever spoken to. Let's also not forget that Latin America has high time zone overlap with what we have here in the United States, so you can hire full-time remote engineers who share most of the workday with your team. It's an end-to-end talent service, so you can find and hire engineers in Central and South America without having to worry about, frankly, the colossal pain of cross-border payroll and benefits and compliance because Revelo handles all of it. If you're hiring engineers, check out revelo.io/screaming to get 20% off your first three months. That's R-E-V-E-L-O dot I-O slash screaming.
Corey: This episode is sponsored in part by LaunchDarkly. Take a look at what it takes to get your code into production. I'm going to just guess that it's awful because it's always awful. No one loves their deployment process. What if launching new features didn't require you to do a full-on code and possibly infrastructure deploy?
What if you could test on a small subset of users and then roll it back immediately if results aren't what you expect? LaunchDarkly does exactly this. To learn more, visit launchdarkly.com and tell them Corey sent you, and watch for the wince.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Today's promoted guest episode is one of those questions I really like to ask because it can often come across as incredibly, well, direct, which is one of the things I love doing. In this case, the question that I am asking is this: when you look around at the list of colossal blunders that people make in the course of careers in technology and the rest, one of the most common is, "Oh, yeah. I don't like the way that this thing works, so I'm going to build my own database." That is the siren call to engineers, and it is often the prelude to horrifying disasters. Today, my guest is Venkat Venkataramani, co-founder and CEO at Rockset. Venkat, thank you for joining me.
Venkat: Thanks for having me, Corey. It's a pleasure to be here.
Corey: So, it is easy for me to sit here in my beautiful ivory tower that is crumbling down around me and use my favorite slash the best database imaginable, which is TXT records shoved into Route 53. Now, there are certainly better databases than that for most use cases. Almost anything, really, to be honest with you, because that is a terrifying pattern; good joke, terrible practice. What is Rockset as we look at the broad landscape of things that store data?
Venkat: Rockset is a real-time analytics platform built for the cloud. Let me break that down a little bit, right? I think it's a very good question when you say, does the world really need another database? Don't we have enough already? SQL databases, NoSQL databases, warehouses, and lakehouses now.
So, if you really break it down, the first digital transformation that happened in the '80s was when people actually retired pen and paper records and started using a relational database to manage their business records and what have you, instead of ledgers and books and what have you. And that was the first digital transformation. And Oracle called the rows in a table 'records' for a reason. They're called records to this date. And then, you know, 20 years later, when all businesses were doing system of record and transactions and transactional databases, then analytics was born, right?
This was, like, the whole era of wanting to make better data-driven business decisions, and BI was born; warehouses and data lakes started becoming more and more mainstream. And there was really a second category of database management systems, because the first category was very good at being a system of record, but not really good at the complex analytics that businesses were asking for to guide their decisions. Fast-forward 20 years from then, the nature of applications is changing. The world is going from batch to real-time, your data never stops coming, the advent of Apache Kafka and technologies like that, 5G, IoTs, data is coming from all sorts of nooks and corners within an enterprise, and now customers and enterprises are acquiring the data in real-time at a scale that the world has never seen before.
Now, how do you get analytics out of that? And then if you look at the database market—the entire market—there are still only two large categories of databases: OLTP databases for transaction processing, and warehouses and data lakes for batch analytics.
Now suddenly, you need the speed of OLTP at the scale of batch, right, in terms of, like, complexity of compute, complexity of storage. So, that is really why we thought the data management space needs that third leg, and we call it a real-time analytics platform, or real-time analytics processing. And this is where the data never stops coming; the queries never stop coming.
You need the speed and the scale, and it's about time we innovate and solve the problem well, because in 2015, 2016, when I was researching for this, every company that was looking to build applications that were real-time applications was building a custom Rube Goldberg machine of sorts. And it was insanely complex, it was insanely expensive. Fast-forward now, you can build a real-time application in a matter of hours with the simplicity of the cloud using Rockset.
Corey: There's a lot to be said about the way we used to do things after the first transformation, when we got into the world of batch processing, where—in the days of punch cards, which was a bit before my time and I believe yours as well—they would drop them off and then a day or two later they would come back after the run, only to get the results and discover: syntax error, because you put the wrong card first or something like that. And it was maddening. In time, that got better, but still, nightly runs have become a thing to the point where even now, by default, if you look at the typical timing of a default Linux install, for example, you see that the middle of the night is when a bunch of things will rotate, when various cleanup jobs get done, et cetera, et cetera. And that seemed like a weird direction to go in. One of the most famous Google April Fools' Day jokes was when they put out their white paper on MapReduce.
And then Yahoo fell for it hook, line, and sinker, built out Hadoop, and we've been stuck with this idea of performing these big query jobs on top of existing giant piles of data, where ideally you can measure it with a wall clock; in practice, you often measure it with a calendar in some cases. And as the world continues to evolve, being able to do streaming processing and understand in real-time what is going on is unlocking different approaches, at least by all accounts. Do you have an example you can give me of a problem that real-time analytics solves for a customer? Because I can sit here and talk all day about how things might theoretically work, but I have to get out of my Route 53-based ivory tower over here; what are customers seeing?
Venkat: That's a great question. And I one hundred percent agree. Google did build MapReduce, and I think it's a very nice continuation of what happened there and what is happening in the world now. They built MapReduce and quickly realized that re-indexing the whole world [laugh] every night, as the size of the internet was exploding, was a bad idea. And you know how Google indexes now? They do real-time indexing.
That is how they index the wor—you know, web. And they look for the changes that are happening on the internet, and they only index the changes. And that is exactly the same principle—one of the core principles—behind Rockset's real-time analytics platform. So, what is the customer story?
So, let me give you one of my favorite ones.
So, the world's number one or number two buy now, pay later company: they have hundreds of millions of users, they have 300,000-plus merchants, they operate in maybe 100-plus countries, with so many different payment methods—you can imagine the complexity. At any given point in time, some part of the product is broken: well, Apple Pay stopped working in Switzerland for this e-commerce merchant. Oh God, like, we've got to first detect that. Forget even debugging and figuring out what happened and having an incident response team. So, what did they do? As they scaled, the number of payments processed in the system across the world was in the millions; first it was millions in a day, and then it was millions in an hour. So, like everybody else, they built a batch-based system.
So, they would accumulate all these payment records, and every six hours—so initially it was a day, and afterwards, you know, you try to see how far you can push it, and they couldn't push it beyond every six hours—every six hours, some batch job would come and process through all the payments that happened, with some statistical models to detect: hey, here are some of the things that you might want to double-click on and follow up on. And as they were scaling, the batch job that they would kick off every six hours was starting to take more than six hours. So, you can see how the story goes. Now, fast-forward, they came to us and said—it's almost like Rockset has, like, a big red button that says, "Real-time this."
And then they were like, "Can you make this real-time? Because not only are we losing millions of potential revenue dollars in a year because something stops working and we're not processing payments, and we don't find out about that until three hours later, five hours later, six hours later, but our merchants are also very unhappy. We are also not able to protect our customers' business, because that is all we are about." And so, fast-forward, they use Rockset, and simply using SQL, now they have all the metrics and statistical computation that they want to do happen in real-time, accurate up to the second. All of their anomaly detectors run every minute, and the anomaly detectors take, like, hundreds of milliseconds to run.
And so, now they've got business observability in real-time, I would say. It's not metrics and machine observability—they now have business observability, in real-time. And that not only saves them a lot of potential revenue loss from downtime, it's also allowing them to build a better product and give their customers a better experience, because they are now telling their merchants and their customers that something is not working in some part of their e-commerce footprint before even the customers notice that something is wrong. And that allows them to build a better product and a better customer experience than their competitors. So, this is a very real-world example of why companies and enterprises are moving from batch to real-time.
Corey: The stories that you, and frankly a lot of other data analytics companies, tend to fall back on all the time are stories like the ones you're telling, where you're talking about the largest buy now, pay later lender, for example. These are companies operating at massive scale who have tremendous existing transaction volume, and they're built out already. That's great, but then I wanted to try to cut to the truth of some of these things.
And when I visit your pricing page at Rockset, it doesn't have what I would expect if that were the only use case. And what that would be is, "Great. Call here to conta—open up a sales quote, and we'll talk to you, et cetera, et cetera, et cetera."
And the answer then is, "Okay, I know it's going to have at least two commas in it, ideally not three, but okay, great." Instead, you have a free tier where it's, "Hey, we'll give you a pile of credits, here are some limits on our free account, et cetera, et cetera." Great. That is awesome. So, it tells me that there is a use case here for folks who have not already, on some level, made a good show of starting the process of conquering the world.
Rather, someone with an idea some evening at two in the morning can wind up diving in and getting started. What is the Twitter for Pets, in-my-garage, spare-time side project story for using something like Rockset? What problem will I have as I wind up building those things out, when I don't have any user traffic or data yet, but I want to, you know, for once in my life, do the smart thing in advance rather than building an impressive tower of technical debt?
Venkat: That is the first thing we built, by the way. When we finished our product, the first thing we built was self-service. The first thing we built was a free-forever tier, which has certain limits because somebody has to pay the bill, right? And then we also have compute instances that are very, very affordable, that cost you approximately $1 a day. And so, we built all of that because real-time analytics is not a need that only the large-scale companies have. And I'll give you a very, very simple example.
Let's say you're building a game, a mobile game. You can use Amazon DynamoDB and use AWS Lambdas and have a serverless stack and, like, you're really only paying… you're keeping your footprint very, very small, and you're able to build a very lively game and see if it gets [wider 00:12:16], and it's growing. And once it grows, you can have all the big-company scaling problems. But in the early days, you're just getting started. Now, if you think about DynamoDB and Lambdas and whatnot, you can build almost every part of the game except probably the leaderboard.
So, how do I build a leaderboard when thousands of people are playing and all of their individual gameplays and scores and everything are just more simple records in DynamoDB? It's all serverless. But DynamoDB doesn't give me a SQL SELECT *, order by score, limit 100, distinct by the same player. No, this is an analytical question, and it has to be updated in real-time; otherwise, you really don't have this thing where I just finished playing, I go to the leaderboard, and within a second or two—if it doesn't update, you kind of lose people along the way.
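To make the leaderboard point concrete, here is a minimal sketch of the kind of analytical query Venkat is describing, the shape that a key-value lookup alone can't express. The schema (game_scores, player_id, score) is hypothetical, the SQL is illustrative rather than any specific product's dialect, and run_query stands in for whatever client executes SQL against the analytics replica.

# Illustrative only: hypothetical schema, not a documented Rockset example.
# "Top 100 players by best score" is an aggregation over all rows,
# i.e., an analytical query, not a key-value lookup.
LEADERBOARD_SQL = """
    SELECT player_id, MAX(score) AS best_score   -- one row per player
    FROM game_scores                             -- collection fed from DynamoDB
    GROUP BY player_id
    ORDER BY best_score DESC                     -- highest first
    LIMIT 100                                    -- top-100 leaderboard
"""

def render_leaderboard(run_query):
    # run_query: any callable that executes SQL on the analytics replica
    # and returns an iterable of dict-like rows.
    for rank, row in enumerate(run_query(LEADERBOARD_SQL), start=1):
        print(f"{rank:>3}. {row['player_id']}: {row['best_score']}")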
So, this is actually a very popular use case when the scale is much smaller: Rockset augments a NoSQL database like a Dynamo or a Mongo—or even a Postgres or MySQL, for that matter—where you can continue to use that as your system of record and keep it small, but cover all of the compute-heavy and analytical parts of your application with Rockset.
So, it's almost like kind of a CQRS pattern, where you use your OLTP database as your system of record, you connect Rockset to it, and—Rockset comes in with built-in connectors, by the way, so you don't have to write a single line of code for your inserts and updates and deletes in your transactional database to get reflected in Rockset within one to two seconds. And so now, all of a sudden, you have a fully indexed, fast SQL replica of your transactional database on which you can do all sorts of analytical queries, and that's fully isolated from your transactional database. So, this is the pattern that I'm talking about. The mobile leaderboard is an example of that pattern where it comes in very handy. But you can imagine almost everybody building some kind of an application has certain parts of it that are very analytical in nature. And by augmenting your transactional database with Rockset, you can have your cake and eat it too.
Corey: One of the challenges, I think, that at least I've run into when it comes to working with data—and let's be clear, I tend to deal with data in relatively small volumes, mostly; the stuff that's significantly large, like, oh, I don't know, AWS bills from large organizations, the format of those is mostly predefined—is that when I'm building something out, using, I don't know, DynamoDB or being dangerous with SQLite or whatnot, invariably I find that even at small scale, I paint myself into a corner by data model design or how I wind up structuring access or the rest, and the thing that I'm doing that makes perfect sense today winds up being incredibly challenging to change later. And I still, in production, have a DynamoDB table that has the word 'test' in its name, because of course I do.
It's not a great place to find yourself in some cases. And I'm curious as to what you've seen, as you've been building this out and watching customers, especially ones who already had significant datasets, as they move to you. Do you have any guidance around how to avoid falling down that particular well?
Venkat: I will say a lot of the complexity in this world comes from solving the right problems using the wrong tool, or solving the right problem in the wrong part of the stack. I'll unpack this a little bit, right? So, when your patterns change, your application is getting more complex, it is demanding more things—that doesn't necessarily mean the first part of the application you built—and let's say DynamoDB was your solution for that—was the wrong choice. That is the right choice, but now you've expanded the scope of your application and the demands that you have on your backend transactional database. And now you have to ask the question: in the expanded scope, which ones are still more of the same category of things for which I chose Dynamo, and which ones are actually not at all?
And so, instead of going and abusing the GSIs and other really complex and expensive indexing options and whatnot that Dynamo, you know, has built, and which have all sorts of limitations—instead of that, what do I really need, and what is the best tool for the job, right? What is the best system for that?
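As a sketch of the CQRS-style split Venkat describes above: writes keep going to the system of record (DynamoDB, here via boto3, whose API is real), while analytical reads go to a replica that a connector keeps in sync. The table name and the injected query runner are invented for illustration; nothing below is Rockset's actual client API.

import boto3

dynamodb = boto3.resource("dynamodb")
scores = dynamodb.Table("game_scores")  # hypothetical table name

def record_score(player_id: str, game_id: str, score: int) -> None:
    # Write path: the OLTP system of record stays in DynamoDB.
    scores.put_item(Item={"player_id": player_id,
                          "game_id": game_id,
                          "score": score})

def top_players(run_analytics_query):
    # Read path: analytical queries go to the SQL replica that the
    # connector keeps in sync; the app never writes to the replica.
    return run_analytics_query(
        "SELECT player_id, MAX(score) AS best_score "
        "FROM game_scores GROUP BY player_id "
        "ORDER BY best_score DESC LIMIT 100"
    )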
And how do I augment? And how do I manage these things? And this goes to the first thing I said, which is this tremendous complexity when you start to build a Rube Goldberg machine of sorts.
Okay, now I'm going to start making changes to Dynamo. Oh, God, like, how do I pick up all of those changes and not miss a single record? Now, replicate that to another, second system that is going to be search-centric or reporting-centric—and do I have to rethink this once in a while? Do I have to build and manage these pipelines? And suddenly, instead of going from one system to two systems, you actually end up going from one system to, like, four different things, with all the pipes and tubes going into the middle.
And so, this is what we really observed. And so, when you come to Rockset and you point us at your DynamoDB table, you don't write a single line of code, and Rockset will automatically scan your Dynamo tables, move that into Rockset, and in real-time your changes—inserts, updates, deletes to Dynamo—will be reflected in Rockset. And this is all using the DynamoDB Streams API, the DynamoDB Scan API, and whatnot, behind the scenes. And this just gives you an example: if you use the right tool for the job here, when suddenly your application is demanding analytical queries on Dynamo, and you do the right research and find the right tool, your complexity doesn't explode at all, and you can still, again, continue to use Dynamo for what it is very, very good at, while augmenting that with a system built for analytics, with full-featured SQL and other capabilities that I can talk about, for the parts of your application for which Dynamo is not a good fit. And so, if you use the right tool for the job, you should be in a very good place.
The other part is about the wrong part of the stack. I'll give a very kind of naive example, and then maybe you can extrapolate that to other patterns of how people could—you know, accidental complexity is the worst. So, let's just say you need to implement access control on your data. Let's say the best place to implement access control is at the database level—it just happens to be that that is the right thing. But this database that I picked doesn't really have role-based access control or what have you; it doesn't really give me all the security features to be able to protect the data the way I want.
So, then what I'm going to do is go look at all the places that actually have business logic and query the database, and I'm going to put a whole bunch of permission management and roles and privileges in there, and you can just see how that will be so error-prone, so hard to maintain, and impossible to scale. And this is the worst form of accidental complexity, because you just looked at that one week or two weeks—how do I get something out when the database I picked doesn't have it—and after those two weeks, you feel like you made some progress by kind of putting some duct-tape if-conditions on all the access paths. But now, [laugh] you've just painted yourself into a really, really bad corner.
And so, this is another variation of the same problem, where you end up solving the right problems in the wrong part of the stack, and that just introduces a tremendous amount of accidental complexity. And so, yeah, both of these are the common pitfalls that I think people run into. I think it's easy to avoid them.
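To illustrate the duct-tape anti-pattern versus a single choke point, here is a hedged sketch. The permission names, the User shape, and the db handle are all hypothetical, and, as Venkat says, a real system would enforce this at the database layer when the database supports it.

from dataclasses import dataclass, field
from functools import wraps

@dataclass
class User:                      # hypothetical user model
    name: str
    permissions: set = field(default_factory=set)

# Anti-pattern: every query path repeats (and slowly diverges from)
# its own inline permission check, duct-taped on after the fact.
def get_invoice_duct_tape(db, user, invoice_id):
    if "invoices:read" not in user.permissions:   # copy-pasted everywhere
        raise PermissionError(f"{user.name} may not read invoices")
    return db.fetch("SELECT * FROM invoices WHERE id = %s", (invoice_id,))

# One choke point instead: a single decorator that all access paths share.
def require(permission):
    def decorator(fn):
        @wraps(fn)
        def wrapper(db, user, *args, **kwargs):
            if permission not in user.permissions:
                raise PermissionError(f"{user.name} lacks {permission}")
            return fn(db, user, *args, **kwargs)
        return wrapper
    return decorator

@require("invoices:read")
def get_invoice(db, user, invoice_id):
    return db.fetch("SELECT * FROM invoices WHERE id = %s", (invoice_id,))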
I would say there's so much research, there's so much content, and if you know how to search for these things, it's all available on the internet. It's a beautiful place. [laugh]. But I guess you have to know how to search for these things. But in my experience, these are the two common pitfalls a lot of people fall into and paint themselves into a corner with.
Corey: Couchbase Capella Database-as-a-Service is flexible, full-featured, and fully managed, with built-in access via key-value, SQL, and full-text search. Flexible JSON documents align to your applications and workloads. Build faster with blazing-fast in-memory performance and automated replication and scaling while reducing cost. Capella has the best price performance of any fully managed document database. Visit couchbase.com/screaminginthecloud to try Capella today for free and be up and running in three minutes with no credit card required. Couchbase Capella: make your data sing.
Corey: A question I have, though, that is an extension of this—and I want to give some flavor to it—is: why is there a market for real-time analytics? And what I mean by that is, early on in my tenure of fixing horrifying AWS bills, I saw a giant pile of money being hurled over at, effectively, a MapReduce cluster for Elastic MapReduce. Great. Okay, well, stream processing is kind of a thing; what about migrating to that? Well, that was a complete non-starter because it wasn't just the job running on those things; there were downstream jobs, with their own downstream jobs. There were thousands of business processes tied to that thing.
And similarly, the idea of real-time analytics—we don't have any use for that because, oh, I don't know, I only wind up pulling these reports on a once-a-week basis, so what do I need them updated for in real-time if I'm looking at them once a week? In practice, the answer is often something aligned with the, "Well, yeah, but if you had a real-time updating dashboard, you would find that more useful than those reports." But people's expectations and business processes have shaped themselves around constraints that now can be removed. But how do you get them to see that? How do you get them to buy in on that? And then how do you untangle that enormous pile of previous constraint into something that leverages the technology that's now available for a brighter future?
Venkat: I think [unintelligible 00:21:40] a really good question: who are the people moving to real-time analytics? What do they see? And why can't they do it with other tech? Like, you know, as you say… EMR, you know, it's just MapReduce; can't I just run it every twenty-four hours, every six hours, every hour? How about every five minutes? It doesn't work that way.
Corey: How about I spin up a whole bunch of parallel clusters on different timescales so I constantly—
Venkat: [laugh].
Corey: —have a new report coming in. It's real-time, except—
Venkat: Exactly.
Corey: —you're constantly putting out new ones, but they're just six hours delayed every time.
Venkat: Exactly. So, you don't really want to do this. And so, let me unpack it one at a time, right? I mean, we talked about a very good example of a business team building business observability at the buy now, pay later company.
That's a very clear value prop for why they want to go from batch to real-time: because it saves their company tremendous losses—potential losses—and also allows them to build a better product.
So, it could be a marketing operations team looking to get more real-time observability to see what campaigns are working well today, and how do I double down and make sure my ad budget for the day is put to good use? I don't have to mention security operations, you know, needing real-time. Don't tell me I got owned three days ago. Tell me—[laugh] somebody is, you know, breaking glass and might be, you know, entering your house right now. Tell me then, and not three days later, you know—
Corey: "Yeah, what alert system do you have for security intrusion?" "So, I read the front page of The New York Times every morning, waiting to see my company's name." Yeah, there probably are better ways to reduce that cycle time.
Venkat: Exactly, right. And so, that is really the need, right? Like, I think more and more business teams are saying, "I need operational intelligence and not business intelligence." Don't make me play Monday-morning quarterback.
My favorite analogy is: it's the middle of the third quarter, I'm six points down. A couple of star players on my team and my opponent's team are injured—some on offense, some on defense. What plays do I call, and how do I play the game slightly differently, to change the outcome of the game and win, as opposed to losing by six points? So, that I think is really what is driving businesses. You know, I want to be more agile, I want to be more nimble, and take, kind of, data-driven decision-making to another level.
So that, I think, is the real force in play. So, now the real question is, why can't they do it already? Because if you go ask a hundred people, "Do you want fast analytics on real-time data or slow analytics on stale data?" how many people are going to say, give me slow and stale? Zero, right? Exactly zero people.
So, then why hasn't it happened yet? I think it goes back to: the world has only seen two kinds of databases—transaction processing systems, built to be the system of record, don't-lose-my-data kind of systems; and then batch analytics, you know, all these warehouses and data lakes. And so, in real-time analytics use cases, the data never stops coming, so you actually need a system that is running 24/7. And then what happens is, as soon as you build a real-time dashboard, like this example that you gave—I just want all of these dashboards to automatically update all the time—immediately people respond, saying, "But I'm not going to be like Clockwork Orange, you know, toothpicks in my eyelids, staring at this 24/7. Can you do something to alert or detect some anomalies and tap on my shoulder when something off is going on?"
And so, now what happens is somebody—a program more than a person—is actually actively monitoring all of these metrics and graphs and doing some analysis, and only bringing it to your attention when you really need it, because something is off, right? So, then suddenly what happens is, you went from accumulate-all-the-data-and-run-a-batch-report to [unintelligible 00:25:16], like, the data never stops coming, the queries never stop coming, I never stop asking questions; it's just a programmatic way of asking those things. And at that point, you have a data app. This is not an analytics dashboard or report anymore.
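A minimal sketch of that "tap on my shoulder" idea: a program, not a person, watches a metric stream and speaks up only when something is off. The rolling z-score rule, window size, and threshold below are assumptions for illustration, not any particular product's detector.

from collections import deque
from statistics import mean, stdev

class AnomalyDetector:
    # Alert only when a value drifts far from its recent history.
    def __init__(self, window: int = 60, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        anomalous = False
        if len(self.history) >= 10:  # need some history before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

# e.g., payments per minute for one (merchant, country, payment method):
detector = AnomalyDetector()
stream = [120, 118, 125, 119, 121, 122, 117, 124, 120, 118, 2]
for minute, payments in enumerate(stream):
    if detector.observe(payments):
        print(f"minute {minute}: anomaly, payments={payments}")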
You have a full-fledged application. In fact, that application is harder to build and scale than any application you've ever built before [laugh], because in those other situations you don't have this torrent of data coming in all the time and complex analytical questions being asked of the data 24/7, you know? And so, that I think is really why a real-time analytics platform has to be built as almost a third leg. So, this is what we call data apps: when your data never stops coming and your queries never stop coming. So, this is really, I think, what is pushing all the expensive EMR clusters, the misuse of your warehouse, the misuse of your data lakes. At the end of the day, this is what I think is blowing up your Snowflake bills, what is blowing up your warehouse bills: you somehow accidentally used the wrong tool for the job [laugh], going back to the one that we just talked about.
You accidentally say, "Oh, God, like, I just need some real-time." With enough thrust, pigs can fly. Is that a good idea? Probably not, right? And so, I don't want to be building a data app on my warehouse just because I can. You should probably use the best tool for the job, and really use something that was built ground-up for it.
And I'll give you one technical insight about how real-time analytics platforms are different from warehouses.
Corey: Please. I'm here for this.
Venkat: Yes. So really, if you think about warehouses and data lakes, I call them storage-optimized systems. I've been building databases all my life, so if I really have to build a database for batch analytics, you just break down all of your expenses in terms of, let's say, compute and storage. What I'm burning 24/7 is storage. Compute comes and goes—when I'm doing a batch data load, or when an analyst logs in and tries to run some queries.
But what I'm actually burning 24/7 is storage, so I want to compress the heck out of the data, and I want to store it on very cheap media. I want to make the storage as cheap as possible, so I want to optimize the heck out of the storage use. And I want to make computation on that possible, but not necessarily efficient. I can shuffle things around and make the analysis possible, but I'm not trying to be compute-efficient. And we just talked about how, as soon as you get into real-time analytics, you very quickly get into the data app business. You're not building a real-time dashboard anymore; you're actually building an application.
So, as soon as you get into that, what happens is you start burning both storage and compute 24/7. And we all know, relatively speaking, [laugh] compute and RAM is about a hundred to a thousand times more expensive than storage, in the grand scheme of things. And so, if you actually go and look at your Snowflake bill, your warehouse bill—BigQuery, no matter what—I bet the computational part of it is about 90 to 95% of the bill, and not the storage. And then if you again break down, okay, who's spending all the compute, you'll very quickly narrow it down to all these real-time-y and data-app-y use cases where you can never turn off the compute on your warehouse or your BigQuery, and those are the ones that are blowing up your costs and complexity. And on the Rockset side, we are actually not storage-optimized; we're compute-optimized.
So, we index all the data as it comes in.
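As back-of-the-envelope arithmetic for the storage-versus-compute point, with every price and size invented purely for illustration (plug in your own bill's numbers):

# Illustrative numbers only; none of these are real prices.
STORAGE_PRICE_TB_MONTH = 25.0     # assumed $/TB-month
COMPUTE_PRICE_NODE_HOUR = 3.0     # assumed $/node-hour
HOURS_PER_MONTH = 730

data_tb = 50
always_on_nodes = 4               # a data app never turns compute off

storage = data_tb * STORAGE_PRICE_TB_MONTH                             # $1,250
compute = always_on_nodes * COMPUTE_PRICE_NODE_HOUR * HOURS_PER_MONTH  # $8,760
total = storage + compute
print(f"compute share of bill: {compute / total:.0%}")                 # ~88%

# The compute-optimized trade: indexing doubles storage, but if it
# cuts always-on compute to a quarter, the total still drops sharply.
traded = storage * 2 + compute / 4
print(f"TCO ratio after trade: {traded / total:.2f}")                  # ~0.47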
And so, the storage actually goes slightly higher, because, you know, we store the data and also the indexes on that data automatically, but we usually cut the computational cost to a quarter of what a typical warehouse needs. So, the TCO for our customers goes down by two- to four-fold, you know? It goes down to half or even a quarter of what they used to spend. Even though their storage cost goes up, in net that is a very, very small fraction of their spend.
And so really, I think good real-time analytics platforms are all compute-optimized and not storage-optimized, and that is what allows them to be a lot more efficient as the backend for these data applications.
Corey: As someone who spends a lot of time staring into the depths of AWS bills, I think that people also lose sight of the reality that it doesn't matter what you're spending on AWS; it invariably pales in comparison to what you're spending on people to work with these things. The reason to go to cloud is not because it is the cheapest possible way to get computers to do things; it's because it's a capability story. It's about unlocking capacity and capabilities you do not have otherwise. And that dramatically increases your feature velocity and lets you achieve things faster, sooner, with better results. And unlocking a capability is always going to be more interesting to a company than saving money on it. When a company cares first, last, and always about just saving money—make the bill lower, the end—it's usually a company in decline. Or, alternately, something very strange is going on over there.
Venkat: I agree with that. One of our favorite customers told us that Rockset took their six-month roadmap and shrunk it to a single afternoon. They're a supply-chain SaaS backend for heavy construction—80% of the concrete being delivered and tracked in North America flows through their platform—and Rockset powers all of their real-time analytics and reporting. And before Rockset, what did they have? They had built a beautiful serverless stack using DynamoDB, with AWS Lambdas and what have you.
And why did they do it all serverless? Because the entire team was two people. [laugh]. And maybe a third person they'd get once in a while, so 2.5. Brilliant people—you know, really pioneers of building an entire data stack on AWS in a serverless fashion; no pipes, no ETL.
And then they were like, oh God, finally, I have to do something, because my business demands—and my customers are demanding—real-time reporting on all of these concrete trucks and aggregate trucks delivering stuff. And real-time reporting is the name of the game for them, so how do I power this? I'd have to build a whole bunch of pipes, deliver it to, like, some Elasticsearch or some kind of cluster that I'd have to keep up in real-time. And this will take me a couple of months, that will take me a couple of months. They came to Rockset on a Thursday, built their MVP over the weekend, and they had the first working version of their product the following Tuesday.
So—and then, you know, there was no turning back at that point; not a single line of code was written. You know, you just go and create an account with Rockset, point us at your Dynamo, and off you go. You know, you can start using SQL and go start building your real-time application. So again, I think there's tremendous value; I think a lot of customers like us, and a lot of customers love us.
And if you really ask them what is the one thing about Rockset that you really like, I think it'll come back to the same thing, which is: you gave me a lot of time back.
What I thought would take six months is now a week. What I thought would be three weeks, we got that in a day. And that allows me to focus on my business. I want to spend more time with my stakeholders—you know, my CPO, my sales teams—and see what they need to grow our business and succeed, and not build yet another data pipeline and have data pipelines and other things coming out of my nose, you know? So, at the end of the day, the simplicity aspect of it is very, very important for real-time analytics, because, you know, we can't really realize our vision for real-time being the new default in every enterprise, wherever analytics is concerned, without making it very, very simple and accessible to everybody.
And so, that continues to be one of our core things. And I think you're absolutely right when you say the biggest expense is actually the people and the time and the energy they have to spend. And not having to stand up a huge data ops team that is building and managing all of these things is probably the number one reason why customers really, really like working with our product.
Corey: I want to thank you for taking so much time to talk me through what you're working on these days. If people want to learn more, where's the best place to find you?
Venkat: We are Rockset—I'll spell it out for your listeners: R-O-C-K-S-E-T—rock set—rockset.com. You can go there, you can start a free trial. There is a blog, rockset.com/blog—a prolific blog that is very active. We have all sorts of stories there, from engineers talking about how they implemented certain things, to customer case studies.
So, if you're really interested in this space, that's one space to follow and watch. If you're interested in giving this a spin, you know, you can go to rockset.com and start a free trial. If you want to talk to someone, there is, like, a 'Request Demo' button there; you click it, and one of our solutions people, or somebody that is more familiar with Rockset, will get in touch with you, and you can have a conversation with them.
Corey: Excellent. And links to that will of course go in the [show notes 00:34:20]. Thank you so much for your time today. I appreciate it.
Venkat: Thanks, Corey. It was great.
Corey: Venkat Venkataramani, co-founder and CEO at Rockset. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an insulting, crappy comment that I will immediately see show up on my real-time dashboard.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Announcer: This has been a HumblePod production. Stay humble.
Callum and Jame provide an overview of the Cephalosporin class of antibiotics, discussing mechanism of action, spectrum, classification, PK/PD, side effects and some clinical uses.
Send suggestions to idiotspodcasting@gmail.com
Callum and Jame provide an overview of the penicillin class of antibiotics, discussing mechanism of action, spectrum, classification, PK/PD, side effects and resistance mechanisms. Also, you can laugh at Jame as he tries to pronounce "isoxazolyl".
Send suggestions to idiotspodcasting@gmail.com
[Link to mp3 file] In episode 422 of Reversim (רברס עם פלטפורמה), I have the honor of hosting Erez Metula in my virtual studio.
(Ran) If you recognize this voice, it's because you are really, really devoted listeners - Erez and I met 10 years ago, or more, maybe even 11 years [closer to 12 by now…], and recorded an episode back then about Penetration Testing [episode 058 on software security, including the historical intro for dedicated listeners], and here we are meeting again after 10 or 11 years to see what has changed. Hint: a lot . . . So before we enter the world of Pen-Testing, Erez - tell us a few words about yourself.
(Erez) Gladly. I've been in the security field roughly as long as I can remember . . . as a kid I was already playing with all sorts of programming languages, breaking into games, and doing all kinds of things [allegedly]. It was clear to me that this was my direction; even as a kid I knew I would somehow combine the world of computers with the world of security - "hacking", as we called it back then; there wasn't yet a name for such a thing. And to do it seriously, it was clear to me that I also had to do it the "proper", "academic" way, let's call it. So after a bachelor's and a master's degree in the field, I said, "Wait, what else can I do?" Maybe I'll go work at a software company, since after all I'm a software developer - but on the other hand, I really love security . . . Then I said - wait, a new field had just been born called Application Security - I'm talking about 20 years ago, right, when I got into this - and I said, "How cool!" A field that combines security and development - exactly that intersection - wow, it sounded really cool, something I really connected with. Since then I've been involved in all sorts of things related to tools I developed, research I carried out, and talks I gave at places like Black Hat and DevCon. I even got to write a book on the subject, called Managed Code Rootkits. Since then I've pushed the field forward and tried to gather around me many people who would work on it, and about 10 years ago I founded a company called AppSec Labs - a company specializing in Application Security. What we do is exactly that: we are 15 people doing Penetration Testing, doing Code Review, and advising on how to write applications securely . . . with our central goal, at the end of the day, being to make the world a safer place where software is concerned.
(Ran) Excellent - a long and respectable history indeed. Not many know this, but I too started my career as a Pen-Tester at some point . . . after finishing my studies it was one of the first things I did, and afterwards I moved to other directions - Frontend and Backend and infrastructure, and today Data Science - but yes, I still have a warm spot in my heart for the Pen-Testing world, and I too have been to Black Hat and the like, I know the crowd . . . In any case, for those who may not know - we've mentioned this word several times: Pen-Testing. What does it mean? What is Pen-Testing? What does it mean to be a Pen-Tester?
(Erez) Pen-Testing, in its most, let's call it, "distilled" form: there is a system, which can be a Web App or a Mobile App . . . and there can also be an infrastructure Pen-Test - a Pen-Test of a file server, of IIS, of Apache . . . it doesn't matter what, there is always a Target. The bottom line - the goal is to produce a report, to produce a list of Vulnerabilities, problems found in the system - so that the other side, usually the system's owner, can understand what they are up against. If the system's owner knows they have some system and has no real idea what problems are in it, then the closest thing to a real attacker, who would break into the system and exploit it, is to bring in someone - let's say "from the good guys", a Penetration Tester - who, in an orderly and controlled manner, and in coordination with that party, performs a kind of "simulation of the bad guy" for them. Only instead of actually exploiting those holes and doing something with them, he simply comes back afterwards and says: "Here, look - these are the problems I found, and from my understanding of the problems, I can also tell you how you should handle and fix them."
(Ran) Very good, excellent - so you can think of a Pen-Tester as a "good burglar": someone who simulates a break-in but at the end of the day gives you a report and doesn't steal your money, or anything else . . .
So this profession, as you said, started 20 or more years ago - but let's talk about what's happening today. What's new, say in the last 5-10 years, technologically, methodologically . . . what's new lately?
(Erez) First of all, a great deal has changed . . . If I compare it to how things were back then, at our previous meeting ~15 years ago, the world was very simple . . . Back then you usually had one technology, one web server . . . everything was very homogeneous. Most things ran on IIS servers; what people wrote were usually Web Apps in ASP . . . later .NET came along. And if there were web applications beyond that, they were Java only . . . it was very limited. Usually, whoever did Penetration Testing in that era was a very, very specific kind of customer - typically banks or defense industries and the like. Today, literally everyone does Penetration Testing - because everyone understands it's a very important need. And that's a very fundamental change we see today: everyone does it all the time, everyone tests everything, not only those applications that are, you know, exposed. And if we look for a moment at the significant difference - I'll say it in one sentence and then unpack it: bottom line, performing Penetration Testing today is far more complex than it used to be. Today, for example, when we look at a Target - with your permission, I'll focus on the world I know, swim in, and specialize in, the world of Applications . . . and by the way, "Applications" is a very broad term: it can be Web Apps, it can be Mobile Apps, it can be IoTs, it can be REST APIs, and . . . you-name-it . . . the entire world of software. In short, it's far more complex to perform Penetration Testing today, because the profile of the Penetration Tester is such that he has to be much more versatile. He can't know only one technology; he can't come and say, "I know only one technology - I only know how to test a Java Web App!" He has to know different technologies, he has to know the differences . . . what's the difference between an application that is, say, installed On-Prem - which, by the way, used to be mostly On-Prem - and, suddenly, applications that are . . . today there's almost no On-Prem; you'll only see On-Prem in special environments. Today most of it is SaaS - and if we take it one step further, today almost everything is built on top of Cloud infrastructure. And SaaS doesn't necessarily mean Cloud: there may be someone with a SaaS who doesn't necessarily use all the advanced features that Cloud providers offer, like storage of Encryption Keys, or services where you "throw your code in" and have some Lambda Function . . . you throw in the code and don't need infrastructure at all . . . These things have changed a great deal, and every kind of system, according to its deployment and its technology, has a whole set of problems that the Pen-Tester has to know. Bottom line - in Pen-Testing you have fixed time. It's not "test as much as you like"; there's always fixed time. At the end of the day, Pen-Testing is a commercial activity with allotted time, and one of the biggest challenges a Pen-Tester faces, beyond the technology, is knowing how to "play" the hours right - how to spread the hours correctly and optimally, and where to put the hours relative to the highest probability of finding Vulnerabilities. I'd say that's the name of the game today.
(Ran) So I'm trying to imagine what your day looks like, or the day of one of the employees at your company . . . Say there's a customer with a new contract, and you now have, for the sake of argument, some "bank of hours" that you're going to invest in Pen-Testing - does it start with analysis? The system's architecture? A conversation with engineers? Or do you treat it as a black box? That's the first question - how "transparent" does the system need to be to you? The second question is whether there's some toolset, tools of the trade, that you always start with first - and from there continue onward, according to the findings?
(Erez) Excellent question . . . excellent questions . . . there are several questions hiding in what you've raised . . .
(Erez) I'll start with a declaration: bottom line, when doing Penetration Testing, the world divides into three types. One is Black-Box; the second, at the other end of the scale, is White-Box; and in the middle sits Gray-Box. I'm a great believer in Gray-Box... but let me first explain what each one means.

Black-Box means "take the system, get lost, and come back to me with a report" - that's the plain-language version... In the best case you get a Username and a Password - you have the system's URL, a User and a Password, and that's it; nobody cooperates with you. In my opinion it's an anachronistic approach... It suits a situation where you already know the system has been tested thoroughly, the likelihood of findings is very low, and there are a few more reasons to choose Black-Box... but bottom line, it is not optimal: you can burn an enormous number of hours on things when you could extract the same Vulnerability in a five-minute conversation with a programmer, or by pin-pointing the right Class in the code - when you know where the logic you want to examine probably hides - and simply reading it to understand what happens there.

At the other end is White-Box, which says "give me the code": we'll do White-Box testing - I'll mostly read the code, ask questions, look at the sequence diagrams and so on... and we'll find problems - we'll look at the Design and find problems.

And in the middle is Gray-Box, which says "let's do both": use both instruments, the "Black-Box-ish" Pen-Testing instrument and the "White-Box-ish" one, to locate Vulnerabilities. The name of the game is that, given a fixed amount of time, I want to find the maximum number of Vulnerabilities. As a Pen-Tester I want to be like a surgeon with a tray of instruments, picking up one scalpel and then another: given a specific problem I want to examine, I'd like to pause and ask - do I approach it on the Black path, because that's the right way to test it? Is it better examined in White? Or perhaps start Black, switch to White, go back to Black, back to White... and in this way locate the problems very efficiently.

Which leads me to the question you asked - what is the methodology? The pipeline goes like this: even before the Penetration Test starts, it's customary to do something called Scoping. Scoping is half business and half technology - a process where you talk with the customer, before there's even a price quote, before you know what will be tested at all, and ask: "Tell me, what interests you? What would you like tested? Sketch the boundaries for me, sketch your components... Is the Web-App in Scope or not? The REST API that talks to your third-party service - in or out?" First you agree on what is wanted and where the boundaries lie, and you gauge the system's complexity: how many pages each system has... because you don't measure a system by its weight in kilograms - you measure it by the number of pages, the number of APIs, and how complex they are. Two systems can each have ten End-Points, yet one is super-complex and the other is simple - a few plain GETs returning information.

After settling the scope of the engagement with the customer, a price quote goes out, he approves it - the business side is done. Then you set a Kick-off, a super-important stage of a Pen-Test: together with the customer, at the first stage, you convene all the relevant parties - on our side, for example, the Pen-Testers and the Product and Project Managers; and on the customer's side, usually whoever knows the product best: the development manager, sometimes the CISO, the head of IT... And you verify, first of all, that we have all the information we need - URLs, Passwords, everything required for the systems - and that everything works. Super-important: it's bad to start an engagement and then discover one of the systems is unavailable because QA decided to run some Stress-test yesterday, or whatever...
(Erez) There are always stories. The Kick-off is also the stage where we want to validate our base assumptions about the boundaries. There is no shortage of examples where someone on the customer's side suddenly wakes up and says, "Wait! That system you said is in Scope - it isn't ready, or it shouldn't be tested at all" - and the opposite happens too: "Wait! What about that other service, the one handling this event rule? We added it a few days ago and it does need to go into Scope..." This is exactly where all these things surface.

After that stage, the custom - and here I'll tie it back to Gray-Box - is to schedule a session with one of the programmers, someone who knows the system well, and walk with him, cross-cut style, through all the areas that matter for Security - literally at the IDE level. Tell him, for instance: "Open Visual Studio and show me how you authenticate users", "Show me how you sign JWT tickets", "Take me to the Authorization - I want to see your permissions model", or "You said you have a SQL Database - do you use Dynamic queries?", or "You said you work with an ORM - I want to see it with my own eyes... take me to the DAL, please."

Why do I say all this? Because I know that soon I'll run the Pen-Test, and one of the things I'll look at is, say, SQL Injection. If I have inside information telling me there is essentially no chance of Dynamic queries in the code - because I saw with my own eyes that the programmer uses an ORM, and let's assume the ORM itself is implemented properly - then the likelihood of a SQL Injection in the queries the runtime generates is slim. So I know I'll verify SQL Injection only briefly, and the expensive hours I would have spent on it go somewhere else... I'll find a different problem. Again, it's a game of probabilities: the Pen-Tester's job is to decide where his hours go.

Jumping ahead - the Pen-Tester's day starts, at the beginning of a project, with a sort of Reconnaissance of the system, Information gathering: going over the system, which APIs exist, WebSocket or no WebSocket, what travels over the wire... JSON or Protobuf, or whatever. A bit of history, by the way - it didn't use to be like this. An HTTP Request used to be plain parameters; all you had to do was play with parameters. Today it is much more complex: with Single-Page Applications you can no longer simply Crawl the whole system and map it; things are far more involved.

That's why one of the important things a Pen-Tester does early on is build himself a model of how the system is constructed, and think like a programmer: "If I were building this..." I get into the programmer's head and understand his considerations. "Why did he send this Request over WebSocket and that one over REST API?" There was probably a reason... He probably needs the WebSocket for a Long-running Connection, which suggests that on the other side the User is considered Authenticated from the moment the Connection was opened. In other words, over the WebSocket I may have been authenticated only once, when the Connection opened - so if I play with the parameters of subsequent messages and feed in another User's or another Resource's ID, the chance of finding something is higher. Why? Because with a REST API, being State-less in this respect, each request is normally checked... There are all these small nuances: once you get into each programmer's head, they hint at where you should look.

In short - after the whole preparation stage, mapping how the system is built and where problems likely hide, another thing to do is map features - for example, File upload or Download features...
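The WebSocket nuance Erez describes above, a user authenticated once when the connection opens and every later message implicitly trusted, is easy to picture in code. Below is a minimal, hypothetical sketch with no real WebSocket framework; SESSIONS and CART_OWNERS stand in for whatever session and data stores an actual app would use. The point mirrors the probability game: a handler should re-check resource ownership on every message, not only at connect time.

```python
# Hypothetical sketch (not from the episode): per-message authorization
# on a long-lived, WebSocket-style connection. Authenticating only when
# the socket opens leaves every later message implicitly trusted -- the
# gap Erez describes.

SESSIONS = {"conn-1": "alice"}                        # connection id -> user
CART_OWNERS = {"cart-17": "alice", "cart-42": "bob"}  # resource -> owner

def handle_message(conn_id: str, message: dict) -> str:
    """Handle one incoming message on an already-open connection."""
    user = SESSIONS.get(conn_id)
    if user is None:
        return "error: not authenticated"

    cart_id = message.get("cart_id")
    # The value may be perfectly *valid* (it exists, it matches the
    # expected format) and still not be *yours* -- so ownership is
    # re-checked on every message, not only at connect time.
    if CART_OWNERS.get(cart_id) != user:
        return "error: forbidden"

    return f"ok: {user} may read {cart_id}"

if __name__ == "__main__":
    print(handle_message("conn-1", {"cart_id": "cart-17"}))  # ok
    print(handle_message("conn-1", {"cart_id": "cart-42"}))  # forbidden
```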
(Erez, continuing) Sometimes it's an Import or Export of various kinds of files - so I already know that my Security test-cases must cover Vulnerabilities such as Directory traversal and Path manipulation and the like. If there were no such feature - note that this is Feature-Driven - if there were no file-related feature at all, then hunting for Directory traversal would sit lower on my list. So one more thing the Pen-Tester does is build himself a kind of sorted list: which Test-cases are the most interesting in this specific system.

It's a fascinating and challenging part - and we see that the more experienced Pen-Testers, the team leads, often carry something extra. Even a very good Pen-Tester who knows how to identify a given problem very well needs the experience of the more seasoned one, who will tell him: "Listen, I have a gut feeling... I have a feeling you'll find Directory traversal in this area." The younger one, who knows how to find Directory traversal and is a wizard at it, will look at the veteran and ask: "How do you know? Where does that gut feeling come from?" That is exactly the experience that teaches you where to allocate the hours.

And if I jump to the end for a moment, the final stage: after finding Vulnerabilities during the engagement, the consultant sets them aside, and at the end he writes a report that maps all of those problems. I'll be glad to expand shortly on what goes into the report and what is done with it.

(Ran) Yes... I suppose a subtext we haven't quite spelled out is that you have a time limit, but you assume the attacker has none. Even if he has no White-Box access, no access to the Source-Code - or at least we hope he doesn't, if we didn't intend to give it - he still has plenty of time to play. He doesn't know whether there's a Directory traversal, so he simply tries; he doesn't know whether the WebSocket has a problem, so he simply tries. The attacker has, say, "infinite time", but you don't: your hours end where the contract says, so you must prioritize by risk.

I wanted to ask - we have many topics to cover and time is short, just like in Pen-Testing... - so let me focus on a few things. One of the most significant developments in the world of Security activities, I think, is the evolution of programming languages. If in the past typical exploits used Buffer overflows, memory corruption, and the like in less managed languages such as C, today's languages are much more managed; they still have vulnerabilities, but of a different kind. Modern languages - the latest versions of Java, and TypeScript, Go, Rust - manage their memory very well and ship with quite a few Security features built in, but I'm guessing they have other vulnerabilities... So if you learn that a code base is written, for the sake of argument, in Go or Rust or TypeScript or another modern language, do you approach it differently - with a different toolset or a different methodology?

(Erez) Unequivocally yes, because every language has its own Common Vulnerabilities - or, put differently, every language has those dark corners where a programmer can shoot himself in the foot. What do I mean? The environment, the method, the whole Community often encourage you to work in a certain way that is, let's say, a bit more dangerous than average, or more dangerous than in another language - especially around dynamic things, or patterns you would perhaps never use in other languages. For example, in environments like Node.js and its kin you are encouraged, more than in other ecosystems, to use Open Source Components... and with a Component you did not develop yourself there is a higher likelihood of a Vulnerability. And go figure how that Package got to npm in the first place - you pull it, and heaven knows what is going on inside it...
(Erez) So there are environments where the Package is the central threat, and environments where you know the platform itself makes mistakes more likely... By the way, you mentioned managed memory - even 10 years ago most code ran on managed memory. Problems like Buffer overflow and Format String are problems we genuinely stopped looking at long ago; the likelihood of finding a Buffer overflow in some Web-App is slim. So most findings today concentrate on what we call "technical problems" - that's the term. A "technical problem" is something like the Directory traversal I mentioned earlier, SQL Injection, XSS, and various other problems. And then there are "logical problems"...

(Ran) Yes - let me add things I've seen to the list: incorrect use of Encryption, or of the libraries around Encryption...

(Erez) Those are logical problems...

(Ran) ...and incorrect use of Authentication...

(Erez) Logical!

(Ran) Okay...

(Erez) Exactly - that is precisely what I was getting at; that's where the world is going. Some background: technical problems are problems that are very easy to formalize. If I'm scanning code, it's relatively easy to identify or define a Pattern for what SQL Injection looks like. Think of something running over the code - a Static Code Analysis tool, some Security product doing scanning - that knows how to recognize SQL Injection or XSS or any other such problem. It has a Pattern in the code: I can define a rule saying "if you see code with a SQL Query class and strings being concatenated into it, blah-blah-blah... then there's a problem." I can formalize that logic. Those are technical problems.

Logical problems, on the other hand, are harder, because a machine cannot look at another machine and decide... it goes as far as Turing's halting problem. We will never fully get there, whatever stories the many AI companies tell us - it won't happen. With logical problems a machine cannot decide - well, there may be things it can do; I haven't seen them... - but take the simplest example: who says that a certain super-sensitive field must be encrypted? Who says this field in the Database, rather than that one, needs Encryption? A machine cannot tell you that, okay? True, some of it can perhaps be inferred...

(Ran) You draw the line between "logical" and "technical" from your vantage point as a Pen-Tester: things you can find by technical means, and things you technically cannot, which you therefore call "logical". But as a developer I'm not really aware of this division... From my perspective everything is, I suppose, a logical problem - an incorrect implementation, going against Best-Practices in many cases, or simply my own lack of understanding or knowledge...

(Erez) Look - "technical problem" versus "logical problem" is terminology that comes from the Penetration Testing world; it's an accepted term and a customary division there. Bottom line, you're right: from a programmer's point of view everything is logical, because everything is code he writes, of course. But in terms of findings, yes - most problems we see today are of the kind where you didn't apply Encryption, or applied it incorrectly, or didn't do Authorization, or your Authorization isn't good... Or, for example, someone performs Parameter Manipulation on some value, and the value he supplies is valid: as a value, it's a perfectly fine value! It passes the RegEx, everything checks out... except that he is not allowed to send it - it's your CartID, not his, for example. That is a logical problem, very hard to catch from the outside; you really have to understand the system's Business-Logic. Which, by the way, means that however far the technology advances and however many methods and tools Pen-Testing acquires - we will always need a Human in the picture...
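To make Erez's technical-versus-logical split concrete, here is a small illustrative sketch of mine (not from the episode), in Python with the standard-library sqlite3 module. The first function shows the concatenation pattern a static scanner can formalize, the second shows the parameterized fix, and the third shows the kind of ownership rule, "users see only their own orders", that no pattern-matcher can infer because it lives in the business logic.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # "Technical" problem: a scanner can formalize this pattern --
    # a SQL string assembled by concatenating untrusted input.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(conn, name):
    # The fix is just as mechanical: let the driver bind the value.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

def get_order(conn, order_id, current_user_id):
    # "Logical" problem territory: order_id can be a perfectly valid
    # value and still belong to someone else. Only the business rule
    # "users see their own orders" -- which no pattern can express --
    # makes this ownership check necessary.
    row = conn.execute(
        "SELECT owner_id, item FROM orders WHERE id = ?",
        (order_id,)).fetchone()
    if row is None or row[0] != current_user_id:
        raise PermissionError("not your order")
    return row[1]

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("CREATE TABLE orders (id INTEGER, owner_id INTEGER, item TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    conn.execute("INSERT INTO orders VALUES (7, 1, 'book')")
    print(find_user_safe(conn, "alice"))  # [(1,)]
    print(get_order(conn, 7, 1))          # book
```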
(Ran) One topic you touched on earlier, when we talked about Node.js - we mentioned open source and Package Managers - and I'd like to generalize it and spend a few more minutes on Supply-Chain Attacks. Whoever comes from the operations world knows supply chains: ships, trucks, planes, warehouses... But what is the supply chain in the software world? It includes everything that helps us write and deliver the software: the IDE, the Package Manager, the various Open-Source packages, the CI, the Deployment System, Docker and Kubernetes, and so on - everything that is not our software but helps us produce it. And lately - or maybe it has only risen in awareness lately - there have been quite a few attacks on this supply chain: attacks on the CI, on packages, Hijacking and so on... How does this change the Pen-Testing world?

(Erez) Bottom line, I'd say it's something Pen-Testing can partially address. Why? Because if a problem exists and is, for the sake of argument, exposed externally, you will see it in a Pen-Test - regardless of whether the programmer made an honest Security bug, which is most cases, or deliberately injected an attack vector into the code (rare, but it happens), or someone else planted a bug somewhere on purpose. If it shows externally, Pen-Testing should identify it.

What won't you identify in a Pen-Test? If someone buried a Backdoor deep inside the Supply-Chain, there is no chance you'll find it. You can't guess that adding one very, very special value to a Request suddenly wakes the Backdoor... it's simply not likely you'll stumble on it in a Pen-Test - and, by the way, it would be very hard to catch with other methods too. That is why Supply Chain problems are so hard. You mentioned ships and warehouses - in the Software world it's more like "someone hid a surprise for me before I even compiled for Production": a surprise deep inside the Compiler, say, or inside the IDE, or inside a Docker Image. Think about it - I pull some Docker Image that already carries a surprise inside. My code is great: it passed Code Review, it passed a Pen-Test - on the normal environment... but once it runs on that Docker Image, I'm in trouble.

There is no shortage of such scenarios in which, somewhere along the way, someone may have planted something - so in the Supply Chain context the fundamental thing is to make sure the entire chain is secured: that you take your Packages from a proper source, that you bring the environment up clean. A Docker Image? No problem - but don't take an image someone else baked; bake it yourself, do the Build. Special Binaries inside? Compile them yourself. And of course watch where you pull the code from.

Today this is also much easier, because many things carry a Digital Signature - we used to have almost no Digital Signatures, and now nearly everything has one. You can verify that a package came from the Trusted source you expect; you can verify the signatures of almost anything. And here is an example of something that didn't exist before - CDNs. It's customary to pull various Static content from a CDN; today it's trivial...
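Before Erez's CDN example continues below, his "verify it came from the trusted source" point can be sketched in a few lines. This is a deliberately simplified stand-in: real supply-chain verification relies on actual digital signatures (GPG-signed packages, Sigstore, signed OS repositories), while this sketch shows only the underlying idea of pinning a downloaded artifact to a publisher-supplied digest. The file name and digest here are hypothetical.

```python
import hashlib

# Hypothetical values: a vendor-published digest and a downloaded file.
# (This digest happens to be sha256 of the bytes b"test".)
EXPECTED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def verify_download(path: str, expected_hex: str) -> bool:
    """Compare a downloaded artifact against its published SHA-256."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex

if __name__ == "__main__":
    with open("pkg.bin", "wb") as f:
        f.write(b"test")                                # stand-in "package"
    print(verify_download("pkg.bin", EXPECTED_SHA256))  # True
```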
(Erez, continuing) Once upon a time you hosted everything yourself - all the JavaScript files, everything. Today, as the developer of my system, when I pull external content - say jQuery from an external source - I can supply its signature as part of the HTML. I couldn't do that in the past: I had to pull the JavaScript into "the holy of holies", into my own Domain - bringing in something from outside with no idea where it came from or whether someone had altered it since I fetched it. Today I can literally supply a Hash, a signature of what I expect to receive - and if the Browser receives a non-matching Package it rejects it and won't load it. Which is wonderful. There are many improvements of this kind that we didn't use to have - and this, by the way, is one of the tricks I recommend using.

(Ran) That actually brings me to the next question - we may not have time to talk about the Report you produce, but after you've found a collection of Vulnerabilities, do you also go further and ultimately provide solutions, or Mitigations, for those problems?

(Erez) There is a separation between the Pen-Testing world and the consulting world. When you do Penetration Testing, you have a Mission: find as many problems as possible and make them accessible - that's part of the job. What does "accessible" mean? It means remembering that whoever reads the report is not a Penetration Tester; I can't write in my own jargon. I have to explain the problems and where they are, and avoid the common trap of letting the reader think the problem exists only in the one example I gave, and fix only that. And one of the most important things to make accessible in the document is the Mitigations.

So, to your question - yes, it is customary to include Mitigations in the document, to say how the problem can be handled. Say I told you your Encryption is no good - by the way, there's a wordplay name for that: En-crap-tion... If your Encryption is En-crap-tion, one of the things I'll write in the document is, for instance, that you used symmetric encryption with ECB as the encryption mode - that's not good, please switch to CBC - and I may even give you the appropriate Flag in your language, since I happen to know which language you work in, along with an actual working code example. That is the report side, the Pen-Test side: whoever receives the report should have everything needed to fix the findings. There are customers and cases where they say, "come help us actually implement the recommendations too" - but the base assumption is that you don't have to lean on us for that. Whoever commissions a Pen-Test should receive all the information and hand it to someone who understands enough - a normal developer - and any normal developer will know how to apply the Mitigations according to the guidance received.

(Ran) Okay - our time is short and I'm still very curious, so I'll allow myself one more question and we'll try to answer it. Today a great many services rely on third-party services: for Monitoring, say, Datadog and the like; for infrastructure, AWS or GCP or Azure... A lot of reliance on third parties - and the question is whether that is also something you take into account in a Pen-Test. Not only the code I wrote, but all the other services I use, the Data I send them, and maybe their own vulnerabilities... Say there's a Vulnerability in PagerDuty - how is that going to affect me?

(Erez) Excellent question... What you're describing has a general name in our world: TCB, the Trusted Computing Base. It means deciding which things are, from your perspective, the base that you assume, as a matter of course, you do not test. For example, when you Pen-Test some Web Application written in Node.js, you won't go and test its operating system - because you say "my base assumption is that its operating system is sound." You could, of course, Pen-Test the operating system for Vulnerabilities, but by the same analogy: say I'm Pen-Testing some Web App that happens to use a third-party service...
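A few lines up, Erez gives ECB-to-CBC as the kind of concrete fix a report should spell out. Here is roughly what such a before-and-after snippet might look like, sketched with Python's third-party cryptography package (the language and library are my assumptions; the episode names neither). It also shows why ECB gets flagged: identical plaintext blocks encrypt to identical ciphertext blocks. A present-day report would usually go a step further and recommend an authenticated mode such as AES-GCM. The conversation resumes below.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)              # AES-256 key
pt = b"sixteen byte msg" * 2      # two identical 16-byte blocks

# Before: ECB leaks structure -- equal plaintext blocks produce equal
# ciphertext blocks, so patterns in the data survive encryption.
enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ecb_ct = enc.update(pt) + enc.finalize()

# After: CBC with a fresh random IV chains the blocks together.
iv = os.urandom(16)
enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
cbc_ct = enc.update(pt) + enc.finalize()

print(ecb_ct[:16] == ecb_ct[16:])   # True  -> the ECB giveaway
print(cbc_ct[:16] == cbc_ct[16:])   # False -> repetition is hidden
```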
(Erez, continuing) Say it uses Twilio's SMS-sending service, or whatever - some third party. I am not going to Pen-Test Twilio. As far as I'm concerned, Twilio is inside our base assumption: a third party presumed Secure. First, I can't chase every satellite orbiting me to infinity - remember, this is a business engagement. Second, legally I can't. Third, even if I could, they'd tell me to get lost. And fourth - it's their responsibility... [None of which matters if the television is listening...]

What you do check is the Interface. If I work with a third party, I will check - and these are things one looks at - that, for example, I'm talking to it over HTTPS, because I want the Data to travel there properly Encrypted. Another rule: I want Server Authentication. It's my Concern - when I go out to a third party I want to authenticate it, to make sure I'm really talking to that service and not to some Man-in-the-Middle. One thing that comes up in Pen-Tests, for example, is that during development someone turned off Certificate Validation. Why? Because in development there was no certificate for some third party, so the developer overrode the method that does Certificate Validation and made it "return True - leave me alone"; the classic leave-me-alone function... Then they reach Production - "great, it works!" - because of course it worked before too... These are exactly the things a Pen-Test does check: you insert a Man-in-the-Middle, present a Certificate that is not signed by the CA the Client is supposed to verify, and if it's accepted, you know there's a problem. In short: you don't test the third party itself; you do test the integration and the Interfaces with it - what is sent, how it is authenticated, and so on.

(Ran) I assume that in this context there's also the matter of private-data leakage - say you send an SMS: are you sending only the details you intended, and not, by mistake, someone else's data?

(Erez) Correct - and with your permission I'll take an example from the Mobile Apps world, where it comes from exactly the opposite direction. If earlier I said I know there's traffic to a certain server, here I suddenly detect, out of the blue, traffic going to some server I know nothing about - who it is, where it's from, what it is... and it turns out the Vendor, in all his kindness, embedded Monitoring and telemetry logic - and sometimes it's even done maliciously. One of the side-effects of a Pen-Test, by the way, is discovering, by chance, traffic nobody knew existed, originating from some SDK that was pulled in. We see it all the time, and it's one of the things we can shed light on "along the way". Sometimes it isn't even a Security matter - sometimes we spot something that simply helps the other side, and they say, "wow, I had no idea that was happening..."

(Ran) So, for example, there could be a case where you install an SDK in your Mobile-App, and without your knowledge it sends all kinds of analytics about your Users - maybe even PII, Personally Identifiable Information - without you knowing and, of course, without you consenting.

(Erez) Correct - and suddenly you discover you're out of compliance with regulation... That innocent third-party Package, whose whole job was to compute something specific for you, turns out to be brazenly taking End-user data and shipping it to its own server. Even if it isn't malicious - even if they need the data to improve their product or to build some Data-Science model - I, as the Vendor, am in trouble, because it puts me in breach of the regulation I'm supposed to meet by sending my customers' data out. It gets us tangled up, and of course it often also borders on Security problems - but it's one of the things a Pen-Test may find along the way.

(Ran) Yes, clear. So, as we said before - our time is short and we need to wrap up. Thank you, Erez!
(Ran) It was fun and interesting - thank you for the update; I hope we meet again, and not in another 10 years... The Pen-Testing world, I gather, reinvents itself every day, and it's fascinating. That's it - thanks!

(Erez) With pleasure - I was very glad to come and talk, and of course if there are more interesting topics I'll gladly come back and expand on them. It's always fun to describe what actually works on this side of things, because I also see that once the development world understands the considerations behind a Pen-Test, it ultimately makes the whole activity work better.

Thanks, Erez, and goodbye! Happy listening, and many thanks to Ofer Porer for the transcription!
In the Security News for this week: Buffer overflows galore, how not to do Kerberos, no patches, no problem, all your IoTs belong to Kalay, the old pen test vs. vulnerability scan, application security and why you shouldn't do it on a shoestring budget, vulnerability disclosure miscommunication, tractor loads of vulnerabilities, The HolesWarm.......malware, T-Mobile breach, and All you need is....Love? No, next-generation identity and access management with zero-trust architecture is what you need!!! Visit https://www.securityweekly.com/psw for all the latest episodes! Show Notes: https://securityweekly.com/psw707
This week, we jump straight into the Security News for this week: Buffer overflows galore, how not to do Kerberos, no patches, no problem, all your IoTs belong to Kalay, the old pen test vs. vulnerability scan, application security and why you shouldn't do it on a shoestring budget, vulnerability disclosure miscommunication, tractor loads of vulnerabilities, The HolesWarm..malware, T-Mobile breach, and All you need is....Love? No, next-generation identity and access management with zero-trust architecture is what you need!!!! Next up, we have a pre-recorded interview featuring Qualys Researcher “Wheel”, who joined Lee and me to discuss Sequoia: A Local Privilege Escalation Vulnerability in Linux's Filesystem Layer!!! Lastly, a segment from Black Hat 2021 featuring Sonali Shah, Chief Product Officer at Invicti Security, all about Shifting Left, and how YOU can make it right! Show Notes: https://securityweekly.com/psw707 Segment Resources: https://blog.qualys.com/vulnerabilities-threat-research/2021/07/20/sequoia-a-local-privilege-escalation-vulnerability-in-linuxs-filesystem-layer-cve-2021-33909 Visit https://securityweekly.com/qualys to learn more about them! Visit https://securityweekly.com/netsparker to learn more about them! Visit https://www.securityweekly.com/psw for all the latest episodes! Visit https://securityweekly.com/acm to sign up for a demo or buy our AI Hunter! Follow us on Twitter: https://www.twitter.com/securityweekly Like us on Facebook: https://www.facebook.com/secweekly
What do you call cyberspace in space? What is the evolution of cyber security in space? Frank Pound - computer scientist, entrepreneur, and founder and president of Astro Sec - is the guest who clarifies all things related to cyberspace. Space traffic control is discussed. Frank advised the Air Force, Space Force, and their contractors to help build the Hack-A-Sat competition, which attracted teams from around the world to demonstrate their prowess in this blended space-and-cyber competition, challenging them in everything from orbital dynamics to radio communications. 1:00 – Intro 2:10 – Bio Frank Pound 11:10 - Frank Pound talks about the democratization of technology 12:03 – The explosion of IOTs on the market, sensors, and rapid advancements on space launch 12:54 – The open-source hardware movement 18:04 – General excitement about space in 2019 and investment in safety 20:56 – Potential for a cascading effect making space travel impossible 23:37 – Frank Pound talks about safety measures 24:50 – A summary of Hack-A-Sat's latest works 27:22 – Alternatives to cyber security and safety in space 37:00 – Frank answers the question on how to do missions in space 40:16 – How to find out more about Frank Pound and Hack-A-Sat's competition, interviews and resources Hacker Valley Studio: Swag | LinkedIn | Twitter | Instagram | Email Ron & Chris | Website Frank Pound: Twitter | Website Support Hacker Valley Studio on Patreon Join our monthly mastermind group via Patreon Visit our friends and sponsor Panther Labs
I discuss quantum computing, blockchain, IOTs, VR/AR, Robotics, biometrics, serverless computing, 5G, Artificial intelligence, natural language processing --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app
(0:00:00) Intro (0:04:00) Sarvesh's computer has been hacked (0:06:00) Amazon offers mental health services & App for US employees (0:15:00) Discussion about Amazon's history of internally testing before releasing to world (ie. AWS) (0:19:00) General discussion about mental health (0:22:00) Amazon's Ring is now largest civilian surveillance network in US (0:26:00) Alexa settings reveal possible plan for IoTs & more neighborhood surveillance features with Guard Plus (0:30:00) Could Ring cameras and upcoming drones be a voyeur's dream? (0:33:00) Snap to release new AR glasses (0:45:30) Rumble - surveillance, impact on people of colour, policing (0:50:30) Dating apps add vaccination badges & special benefits to those who get poked (0:54:30) Elon Musk announced potential factories in Russia (0:56:15) Conspiracy - Elon maybe trying to escape Dogecoin! Pitting Russia vs China for his love! (0:59:00) Netflix possibly expanding into gaming (1:05:00) China's internet watchdog CAC has charged apps with privacy issues & given 15 days to fix (1:08:00) Apple Watch Assistive Touch (1:09:00) Apple's In-Air Sensing Strip patents (1:16:45) Discussion - eye tracking tech (1:25:00) Facebook announces Social Live Stream Shopping (1:26:50) Online Shopping Trends from Asia: social e-commerce platform Pinduoduo, rewards from mobile gaming... (1:31:30) Indonesia's e-commerce giant Tokopedia turning K-pop concerts into online shopping events (1:34:45) Discussion - AR VR in fashion (1:36:30) Pope Mobile is going electric. No word on the autonomous level or if God Speed is faster than plaid. (1:39:15) Kids and games (1:40:15) Aging in home (1:43:00) Medical sensors as small as a grain of salt (1:44:00) Austria's Crypto Stamp with NFC chip and blockchain based tracking (1:44:30) Discussion - tracking devices and supply chains (1:52:30) Breaking News - Twitter will let users with 1000 or more followers schedule ticketed spaces (1:55:00) Apple will still take its 30% cut from Twitter-Stripe transactions in Ticketed Spaces (2:01:00) Microsoft and Apple continue to wage war on Right-to-Repair (2:02:00) Discussion - Right to Repair, Importance of Clubhouse for global discourse (2:31:00) Food delivery app in Iraq won competition in Dubai and raised funding (2:36:00) Google DeepMind tried and failed to get more autonomy from Alphabet for years (2:38:00) Snapchat has grown 100% year-over-year for the last 5 quarters in India (2:41:00) Green discussion - Colorado green jobs, Arctic getting hotter, agencies around world recommend cutting back fossil fuel use (2:46:00) European power grid nearly collapsed due to coal plant (2:49:00) 'Black Fungus' disease in India (2:59:00) BREAKING NEWS - TECH NEWS AROUND THE WORLD HITS 1K Twitter followers
When we think of solar cells, we usually picture panels installed outdoors, collecting energy from sunlight and converting it into electricity - the familiar panels mounted near homes, on the roofs of buildings, or in large outdoor solar farms. But did you know that solar-cell technology is now being pushed a step further, by tuning the materials used in the panels so they can also harvest energy from weaker light, such as the light inside buildings? How do indoor solar cells work, what are they good for, and how can they answer the needs of the Internet of Things (IoTs) era? Get to know "indoor solar cells" with Assistant Professor Dr. Pongsakorn Kanjanaboos, a faculty member in the Materials Science and Innovation program and recipient of a science and technology research grant from the Toray Science Foundation, Thailand. 01:34 Are there other kinds of solar cells besides the familiar outdoor ones? Asst. Prof. Dr. Pongsakorn Kanjanaboos describes the outdoor solar cells installed everywhere and the indoor variety, and explains the differences between the two. 02:30 Why research solar cells for indoor use? He recounts the background and the thinking behind developing indoor solar cells that support the coming Internet of Things (IoTs) era while also using energy efficiently. 04:18 As a recipient of the science and technology research grant from the Toray Science Foundation, Thailand - what is the goal of the research into indoor solar-cell technology? Asst. Prof. Dr. Pongsakorn …
It's Too Late Episode 101: COVID-iots On this week's episode of It's Too Late, Alan and Blake talk about the changes resulting from Covid-19, the upcoming "relief bill" from Congress, and enjoy music from Delmer and the V8's You can catch live streams of our episodes in video as they debut on Wednesdays at 8pm central, 9pm eastern time at facebook.com/alanmosleytv and see our entire library at www.youtube.com/alanmosleytv and https://odysee.com/@alanmosleytv:6 Our show is now available as an audio podcast on all your favorite podcasting platforms via https://anchor.fm/alan-mosley You can support the show by subscribing to our Patreon at https://www.patreon.com/alanmosley https://www.twitter.com/alanmosleytv http://www.alanmosley.tv --- Support this podcast: https://anchor.fm/alanmosleytv/support
On this episode, we spoke with the Head of Innovation and Uplift at Vodafone, Julia Doll. Julia talks about the Internet of Things (IoT) and how companies can leverage it to improve the experience for customers. She also shared opportunities for partnership with startups that are offering services or products around the use of IoTs. This episode promises to be a very informative one. Do enjoy! Notes: Julia Doll and her team at Vodafone are always looking out for entrepreneurs with promising ideas seeking to leverage IoT technology to scale. Reach out to Julia at linkedin.com/in/julia-doll-1562a616 Connect: Write to us dip@dodo.ng, you can also follow us on Twitter: @DODO_Nigeria
Hello! This episode is a true homecoming in that I actually recorded it from home. Yay! WARNING!!! WARNING!!! This episode contains a ton of singing. If you don't like singing, do not listen!!! With that said, I wanted to follow up on part 1 and 2 of this series and share some additional cool tools that others have told me about in regards to securing and monitoring all your ioTs! Home Assistant - is described on its Wikipedia page as "a free and open-source home automation software designed to be the central control system in a smart home or smart house." You can quickly grab the HA image and dump it on an SD card with Balena Etcher and be up and running in minutes. I found HA a bit overkill/complicated for my needs, but my pal Hackernovice (on 7MS Slack) says this video demonstrates why he really loves it. Prometheus, recommended by our pal Mojodojo101, is "a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true." I found a great RPi install guide that will help you get it up and running in a snap. I love the capabilities and possibilities of Prometheus, but much like Home Assistant, it quickly got to "more than I need" territory. The final thing we talk about today is trying to answer this question: with so many of my ioTs tied to some cloud app/service, how do I keep these accounts themselves as secure as possible? Songs sung in this episode include: Follow Through by Gavin DeGraw Livin' on a Prayer The Look that Says You Love Me (Brian Johnson) Goodness of God
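For a feel of how Prometheus's pull model works, here is a minimal exporter sketch using the official prometheus_client Python package (the metric name and port are placeholders of mine, not from the episode). Run it, point a Prometheus scrape job at localhost:8000, and the gauge shows up ready to graph.

```python
import random
import time

from prometheus_client import Gauge, start_http_server

# A made-up metric for a home sensor; Prometheus scrapes it over HTTP.
TEMP = Gauge("living_room_temp_celsius", "Living room temperature")

if __name__ == "__main__":
    start_http_server(8000)                 # exposes /metrics on port 8000
    while True:
        TEMP.set(20 + random.random() * 2)  # stand-in for a real reading
        time.sleep(15)
```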
As smartphone and connected device technology continue to evolve, mobile carriers' roaming reporting requirements are becoming more complex. Broader data analytics are needed to evolve reporting capabilities for newer and dynamic wholesale business models such as IoT and M2M usage. Too often roaming reporting tools focus on billing for megabytes and minutes rather than roaming behaviors, possibly causing operators to expend network resources and then lose out on potential revenue. In this podcast, Nina Le Richardson, Director of Product at Transaction Network Services (TNS), discusses the complexities that go along with 5G and network slicing. Nina and Doug also touch on complex wholesale IoTs that need flexibility outside of existing standards. We also learn why it's important for telecom operators to turn to a financial clearing house that will provide data visibility, optimize roaming partner relationships, and enable the evolution of charging beyond today's standards.
The craziness of the Wuhan CHINESE Coronavirus Follow on Twitter -@cantcancelus also, Log into Anchor.fm and RECORD A MESSAGE FOR US!!! It may be used on the next episode! --- Send in a voice message: https://podcasters.spotify.com/pod/show/alreadycanceled/message Support this podcast: https://podcasters.spotify.com/pod/show/alreadycanceled/support
Latest on IoTs: Dedicated Fascinating IoT Operating System is Here; Edge Computing on the Rise, changing data storage process and interaction with Cloud
Who would have thought that a Chinese brand not even 10 years old could stand up and take on Apple... What makes Xiaomi so formidable? Invescope Podcast invites you to look at the investment opportunity in Xiaomi: - What does Xiaomi actually do for a living? Lei Jun, the founder, says Xiaomi is not a hardware company but an Internet Company. - Today Xiaomi's revenue splits into Smartphones at 62%, IoTs and Lifestyle Products at 28%, and Internet Services at 10% - yet Internet Services account for over 60% of total profit. - The growth opportunity to watch is the growth in sales of its IoT products: in the first quarter of 2018, Xiaomi led the global IoT market with a 1.9% share, followed by Amazon at 1.2%, Apple at 1%, Google at 0.9%, and Samsung at 0.8%. - Xiaomi focuses on selling good-quality products at low prices, then earns its revenue from advertising and internet services on top of its Ecosystem. - Xiaomi is now expanding into international markets, but whether its revenue model can simply copy-and-paste the success it found in the Chinese market remains the thing to watch. Host: Ball Phakphum Sirihongthong, board member of the Thai Value Investor Association, @Kon Sang Vela by Road to Billion, and Tina Supattakit Jetthaweekit @Made in Tena
When the Internet of Things is everywhere: living with IoT all around us. Magnus Unemyr has worked through IoT's history - the last 25 years in international technical marketing, in the software industry, with microprocessors and silicon chips. ABOUT THE EPISODE. Audience: consultants, project managers, decision-makers, data science, anyone interested in the basics. You'll learn: what IoT is, why you should use IoT, the future of IoT, data collection. LISTEN via Spotify, Google Podcasts, Apple Podcasts. It used to be called embedded systems - electronics built into other systems to control and monitor them. They are all around us: the ABS brakes in the car, the bathroom scale that got a digital display. Then came the internet, and we began controlling these systems remotely - switching lamps on and off from the phone - or the products started sending data to the cloud and to each other. Wind turbines today stream operational data that can be analyzed with AI to spot the same data patterns another turbine showed a week before it broke down, so the turbine can be serviced before a breakdown brings costly downtime. Parts of the agricultural sector are far ahead: sensors, various AI applications, self-driving tractors, and satellite imagery are used to optimize the yield of every square meter of the field. Buildings and housing are another area where IoT is useful: SMHI's weather forecast can control the radiators, and the washing machine can order detergent automatically. Magnus pours out good examples and tips on how to get started! Magnus Unemyr, Jonas Jaani (18:52) Magnus Unemyr on LinkedIn. Magnus's website: www.unemyr.com https://www.amazon.com/Internet-Things-Industrial-Revolution-predictive-ebook/dp/B077RLMGSW/ref=sr_1_2?qid=1564921840&refinements=p_27%3AMagnus+Unemyr&s=digital-text&sr=1-2&text=Magnus+Unemyr Magnus's book on the Internet of Things can be bought via Amazon - link above. More IoT episodes from Effekten - the digitalization podcast: Secure IoT, sound and wow in the future (episode 59); Machine Learning, IoT and secure security? (episode 45)
The reason the digital twin, or software-defined product, as I prefer to call it, is the most important IoT tech, is because it's the basis for the unique functionality possible from using the Internet of Things. Going beyond smarts or connectivity, it is this unique functionality that creates value, enough value that results in a profitable IoT product. In this episode of the IoT Business Show, the second in a series on the digital twin, I speak with Arnulf Hagen about the underlying models of the digital twin, and the high-level ways you can use them to create value. Read the rest of the show analysis notes including the transcripts at: http://bit.ly/IoTPodcast85notes This show is brought to you by DIGITAL OPERATING PARTNERS Related links you may find useful: Season 1: Episodes and show notes Season 1 book: IoT Inc Season 2: Episodes and show notes Season 2 book: The Private Equity Digital Operating Partner Training: Digital transformation certification
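To ground the "software-defined product" idea in something concrete, here is a generic sketch of how a twin is often modeled: a cloud-side record holding reported state (what the device last said) and desired state (what the back end wants), with the delta driving commands. The desired/reported split loosely echoes how platforms such as Azure IoT Hub structure device twins; the class and field names below are illustrative only, not Arnulf Hagen's models from the episode.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Cloud-side mirror of one physical device."""
    device_id: str
    reported: dict = field(default_factory=dict)  # last state from device
    desired: dict = field(default_factory=dict)   # state the back end wants

    def record_report(self, update: dict) -> None:
        """Apply a state report sent up by the device."""
        self.reported.update(update)

    def pending_changes(self) -> dict:
        """Settings where desired differs from reported: commands to send."""
        return {k: v for k, v in self.desired.items()
                if self.reported.get(k) != v}

if __name__ == "__main__":
    pump = DigitalTwin("pump-7", reported={"rpm": 1500})
    pump.desired["rpm"] = 1200
    print(pump.pending_changes())   # {'rpm': 1200}
```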
In this new episode of Tech Café Domotique, we talk about connected cameras: what they're for, which models, at what price? News: Follow the cream of our tech watch on Flipboard. Looking back at Google I/O: the Google Nest Hub Max (not in France... why is that?) and, at last, the arrival of the Google Nest Hub. No more Works-with-Nest protocol at home. End of support for the Philips Hue bridge V1 (the round one, not the square one). Jeedom: rollout of the official plug-ins for Google Home and Alexa (subscription). Topic: cameras. Surveillance cameras are often a crucial point in home automation. A genuine tool for watching over your home remotely, they are now an integral part of any home-automation setup. In the early days of home automation we only had bulky, conspicuous cameras that required cabling, plus supermarket-control-room-style tools or software to watch them. Today, with Wi-Fi everywhere, better quality and resolution (1080p HD), and the spread of IoTs, it is very easy and inexpensive to set up any type of camera to secure your home - and not only that! Some wired cameras can now be powered over PoE alone (48 V carried on the network cable alongside the data). Beyond that, more and more cameras are fully autonomous, battery- or solar-powered, or even work without an internet connection (via a SIM card). They can be used inside the home or outdoors. Depending on the model, you can listen to what is happening or use a speaker to talk through the camera. Some models let you select a zone to monitor, with notifications and video recording when movement is detected in that zone. Uses: remote surveillance (home, shop, holiday house...); presence detection; identifying people (with recognition algorithms); watching pets; deterrence; watching children or babies with a baby-monitor-style camera (I'm not a fan... but it exists); sex tapes... (no, just kidding! :-) ). Fairly widespread brands with a reputation for reliability: NETATMO: the Netatmo Welcome, an indoor cam with person recognition, quite good-looking without the look of a "classic" camera (personally I'm thinking of getting one). The Netatmo Presence, an outdoor cam with an LED floodlight on detection, able to distinguish detection types: people, vehicles, or animals. Integrates with Jeedom or other home-automation systems and is HomeKit compatible, so it's very handy for building advanced scenarios on detection (turning on other lights/devices in the house to simulate presence when away...). Downside: it depends on Netatmo's servers - fairly reliable, and at worst it still works locally. ARLO: Go, Pro, Pro 2... wireless, with the option of a SIM card where there is no network - perfect for a place without internet, such as a holiday home. (Personally, we have a family house with no internet connection; it would be ideal.) D-LINK: works well; personally I have a good old indoor D-Link, 360°, inexpensive, average quality, but it works and integrates into Jeedom. XIAOMI: known for "low-cost" but good-quality gear, they have of course released several camera models. I've tested several; they work well, but you depend on Chinese servers - although for some models we're starting to get compatibility with European servers, you still always depend on external servers.
Of course, there are hacks to flash the firmware and turn them into plain IP cameras that can be integrated into IP-camera management applications or into home-automation systems like Jeedom. Careful: check beforehand which models are compatible... CANARY: I believe the first ones compatible with HomeKit? Reolink: wireless, not too expensive,
Why IoTs have created a security crisis and strained the communications infrastructure along the way. By Acreto IoT Security. 5G is coming! 5G is coming! But in the 4G LTE era, where access is lightning fast, what is driving the push for 5G? 4G is a technology from the 2000s with one primary intent: to enable mobile devices to take advantage of apps. For the apps, app stores, streaming, and other services to succeed, mobile devices need to just plain work. This means they must work transparently, reliably, and consistently for users to interface and interact with their apps and content. 4G solved the problems of 2G, where data was unusable, and of 3G, which at best was used for email and some browsing in a pinch. To that extent, it has been a resounding success.

However, connected devices have seeped into everyday life in a low-key and transparent way - so much so that the prevailing industry mantra is that "IoTs are coming". In reality, IoTs arrived long ago. Today, mobile phones are ubiquitous - so ubiquitous that the mobile phone market has all but saturated. Yet the IoTs that are perceived to be "coming" already number twice the mobile phones in use today (16 billion vs. 8 billion). Just think about how many smart devices are in your personal life already: all the smart TVs, smart thermostats, smart door locks and video doorbells, and more. Today, some version of anything and everything comes with an IP address. Tomorrow, everything will simply be assumed to have an IP address. IoTs are used for measurement, reporting, monitoring, content dissemination, cost management, or performing a variety of other functions - and in many instances, technologies are IoT-enabled due to plain old peer pressure: everybody else is connected and we have to keep up with the Kardashians. Today, the things that matter are connected - and there are a lot of things that matter. And we are well on our way along the trajectory where "connected everything" becomes the standard.

The exponential growth of connected devices has strained our communications infrastructure beyond its breaking point. It has driven the complete exhaustion of IPv4 addresses, forcing unwilling network operators to fast-track the transition to IPv6. Moreover, network operators have realized that, much like IPv4, the 4G LTE network is cracking under the burden of connected devices. In reality, 4G just can't keep up with the scale trajectory and performance demands of IoT technologies. One key factor is that 4G is not decentralized enough: as decentralized as 4G networks are, they are still too centralized for the continuing increase in the volume of IoTs. Three missing infrastructure elements have to mature in order to fully support the scale, form, and function of the 21st-century Internetwork of Everything. First, Scale: comparatively, enterprise technologies are like a gorilla, emphasizing static tools, whereas IoTs are like a swarm of bees - completely manageable in small quantities, overwhelming in medium quantities, and suffocating at full scale. Second, Form: in comparison to autonomous and network-centric technologies, IoTs are distributed and operate on many different public and private networks, with dependencies on remotely operated third-party applications and management. Third, Function: today's standards-based technologies can be used in a variety of roles; inversely, connected technologies are often small, resource-limited, single-function devices that perform micro-functions.
Connected devices, IoTs, cloud-enabled technologies, or whichever other name they go by, operate at a radically different scale, with radically different form and function characteristics. Ultimately, they demand a radically different technology infrastructure altogether.

First, let's talk about Addressing. The Internetwork of Everything requires each and every device, server, cloud, desktop, and anything else that makes up the Internet - no matter how small - to have a unique identity. Today we primarily use the IPv4 addressing scheme. IPv4 has a maximum capacity of 4.2 billion addresses (4,294,967,296 to be exact). However, consider that we have over 8 billion mobile phones alone, and another 16 billion IoTs in use today, not to mention all the computers. The world has turned to tricks like Network Address Translation (NAT) to compensate, but these are just band-aids that are currently straining at the seams. IPv6 has been around since 1994, and in contrast to IPv4's 4 billion addresses, it sports 3.4 × 10^38 addresses - or 340 undecillion, 282 decillion, 366 nonillion, 920 octillion, 938 septillion, 463 sextillion, 463 quintillion, 374 quadrillion, 607 trillion, 431 billion, 768 million, 211 thousand and 456, to be exact. That next-generation address space is adequate for the massive scale of IoTs - but it also makes IPv6 more complex to configure, and many technologists have not built up the "muscle memory" they developed with IPv4. However, there are no IPv4 addresses left. Because of this, technologists are pushing to implement IPv6 on all their networks, and all the major players have already fully implemented it. Anecdotally, IPv6 is said to have as many IP addresses as there are grains of sand on the earth, which should serve us well in supporting the expansion of IoTs to nearly 50 billion in the next few years.

Next, let's talk about 5G Networks. 5G, as its name implies, is the 5th generation of mobile networks. It has several advantages over previous generations of mobile network technology, including scale, performance, and availability, as well as lighter demands on its constituent devices. Believe it or not, the highly decentralized 4G/LTE networks are not decentralized enough to support IoT and connected-device platforms. It all comes down to density: the sheer number of IoTs is driving a level of density best described by an "IoTs per square foot" model, compared to today's devices-per-base-station cell area. Making some broad yet reasonable assumptions, the average 4G/LTE cell tower today covers an area from a few square miles up to about 10, with each tower supporting several thousand connections at up to one gigabit per second of data throughput. The number of mobile phones and IoTs in any cell area is starting to outpace the maximum connection and bandwidth capacity of the towers; at this rate it won't be long until portions of the infrastructure are fully saturated. Another factor that needs to be addressed is frequency spectrum. Currently, most mobile networks operate within the 700 MHz (megahertz) to sub-3.0 GHz (gigahertz) range. This sub-3.0 GHz spectrum is also becoming saturated and will soon be unable to provide the capacity needed for the volume of connected devices. This, though, is where 5G networks really shine: 5G operates using a greater number of cell towers with smaller coverage areas, each capable of supporting a greater number of devices.
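A quick aside before the article's 5G thread continues: the addressing arithmetic above is easy to sanity-check with Python's standard-library ipaddress module.

```python
import ipaddress

# Total address counts of the full IPv4 and IPv6 spaces.
v4 = ipaddress.ip_network("0.0.0.0/0").num_addresses
v6 = ipaddress.ip_network("::/0").num_addresses

print(f"IPv4:  {v4:,}")        # 4,294,967,296 (2**32)
print(f"IPv6:  {v6:,}")        # 340,282,366,920,938,463,... (2**128)
print(f"ratio: {v6 // v4:,}")  # how many whole IPv4 internets fit in IPv6
```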
5G also operates at much higher frequency ranges - from 3 GHz to 30 GHz. The additional range buys much more capacity for existing carriers as well as providing more operating room for additional, more nuanced carrier networks. More carriers mean more competition, driving lower prices, and more specialized service providers supporting specialty technologies. There is also more capacity and intelligence built into 5G: it uses cognitive techniques to distinguish between mobile and static devices and determine the best methods for content delivery to each network subscriber. 5G offers robust performance that meets or beats network bandwidth available today only via fiber-optic networks. 5G has been tested in a lab at up to an astonishing 1 Tbps (terabit per second) while still maintaining real-world practical performance of 10 to 50 Gbps. 5G's scale, capacity, and performance are a game-changer.

Finally, let's talk about IoT Security. Aside from adequately scalable addressing and communications infrastructure, securing all of the distributed and diverse platforms that use them is another challenge that has to be overcome. Realistically, the combination of 1) the unique identity for every individual technology that IPv6 provides, 2) the enhanced communications capacities and capabilities of 5G, and 3) the many-to-many communications that IPv6 and 5G together enable, makes security not just important but an imperative necessity. Today's security models are not adequate for the new generation of infrastructure; a whole new security model is needed to support the IPv6/5G generation of communications. On-device security is not viable, because the sheer volume and variety of unique, purpose-built technologies that need securing create an uncontrollable, hyper-fragmented jumble of security tools - a patchwork quilt that organizations have to acquire, implement, integrate, operationalize, manage, troubleshoot, and refresh. A complete non-starter! Network security tools simply don't support mobile and distributed technologies - the very thing 5G enables. It is like trying to fit a square peg into a round security hole. Then there are the cloud-based IoT security companies. Securing distributed platforms from the cloud is very viable, except that almost all IoT-security cloud plays are what might be called "You're Screwed" technologies: notification-oriented products that collect logs from devices and analyze them to detect malicious behavior. Once malicious behavior is detected, they notify administrators, who have to respond manually to each incident. This approach is reactive and not sustainable at scale.

The Future of IPv6, 5G, and IoT Security. IPv6, 5G networks, and IoT security are the critical trio that have to work cohesively and effectively at scale to serve as the enablement platform for a more prolific use of the Internet-of-Things. A shortcoming in any one of these areas translates to a shortcoming in the overall solution. Today, IPv6 is well established and, though not ubiquitous, it's close - and there is clarity on how to get it there. 5G is well on its way, and the telcos have already started their 5G rollouts. Security still remains an unanswered challenge. Acreto recognizes the weakness in today's available security options and has developed a platform from the ground up to work hand-in-hand with IPv6 and 5G networks to empower and enable the Internet-of-Everything.
Learn more about Acreto's platform on our website here. Also on our website, you can find links to the American Registry for Internet Numbers' (ARIN) notification to network providers of IPv4 address exhaustion, as well as a letter on how to deal with IP address depletion from the Number Resource Organization (NRO). Learn more or read online by visiting our web site: Acreto.io – On Twitter: @acretoio – and if you haven't done so, sign up for the Acreto IoT Security podcast. You can get it from Apple, Google or your favorite podcast app. About Acreto IoT Security: Acreto IoT Security delivers advanced security for IoT Ecosystems, from the cloud. IoTs are slated to grow to 50 Billion by 2021. Acreto's Ecosystem security protects all Clouds, users, applications, and purpose-built IoTs that are unable to defend themselves in-the-wild. The Acreto platform offers simplicity and agility, and is guaranteed to protect IoTs for their entire 8-20 year lifespan. The company is founded and led by an experienced management team, with multiple successful cloud security innovations. Learn more by visiting Acreto IoT Security on the web at acreto.io or on Twitter @acretoio.
Why We Did This – Facebook's New Product: You. In a series of confidential strategy sessions with the Acreto Advisory team, led by Bob Flores, former CTO of the CIA, we set out to identify potential mid- to long-term threats that we should monitor. In studying the challenges that come with securing and adopting IoT technologies, the complexities of how they operate, and the dependency model they establish sociologically, we realized that Facebook, Google, and other similar tech giants are starved for data points. "It used to be that analysis of large amounts of data was limited to the biological capacity of the person. Computers didn't use to have the processing power or the algorithms and data sciences that they do today. Now, that's not the case. The fact of the matter is that all these social media companies are data-starved. The more data points they have, the more they can absorb. There is no overload capacity for these social giants." – Babak Pasdar, CEO and CTO of Acreto. Given recent events, and since we had one of the foremost experts in data collection in the world with us, when conversation turned to Facebook, we homed in on their data collection platform – where they are now and where they are heading in the future. We uncovered enough in that meeting to warrant a deeper dive into the Facebook machine. We studied the company, their practices, their history, their technology and even the psychology of its management team. We uncovered a lot of information, and the more we uncovered, the more we wanted to dig. Through extensive research exploring investments, patents, acquisitions, market positioning and even management's comments, we surfaced findings we thought were concerning. Pasdar explains, "We first became professionally interested in Facebook when we realized they have pinned their strategic future on IoTs. Where once Facebook's information sources were limited to a handful of devices like computers and phones, with IoT integration they can collect much more granular data from hundreds if not thousands of sources." Part of what makes this challenge difficult to address is that the social media companies have features and functions that people want, and they have built social environments that have become 21st century meeting grounds. These platforms are where the global community meets. The data points that IoT devices represent are hard to give up, because the same functionality that exposes the data is often exactly what people want from social platforms and from 'connecting' with others – as things stand, the features are an all-or-nothing proposition. What we're doing, first and foremost, is identifying the problem. We are also offering organizations and consumers a balanced choice, so that they can share the information they want to share, utilize the platform's services in the granular way they desire to share or engage, and remain empowered to withhold the data they want to protect or keep private. Facebook has proven it can be a kingmaker. Despite the company's public relations lines, it's clear that every party and every politician, for any seat, will engage in Facebook hacking. We define Facebook hacking as utilizing publicly available resources, along with coercion and manipulation of people, technologies and processes, to gain advantages. Advantages that can be for a cause, God, pocketbook, or country.
Facebook hacking is not just limited to politicians; it also extends to adversaries, including those who wish physical and economic harm upon others. The stage has been set for compromising and manipulating entire communities. When thinking about securing IoT devices, we think like hackers do. How do we break it or steal it? How do we manipulate it or prevent it from functioning? How do we destroy it? These are the questions we ask. Hacking is not direct or simple. Many times, hacking involves a complex orchestration of multiple components, typically with many permutations. In thinking this through, we realized first how integral IoT devices are to social media, and second, the impact they have on privacy and on how we live our lives. If Facebook and Google can know as much about you as they do today with just a handful of devices such as your computer, your phone, or your watch, picture how much they would know about you and how they could manipulate you – and how they could manipulate societies, economies, or even democracies – when they have thousands of highly granular data points for each individual they track. Facebook's reach is astounding. The organization collects a constant stream of data from one-third of the world's population and has its roots nestled in half of the world's websites. In Acreto's Facebook Dossier, the team makes the case for Facebook as spyware and a personal information trafficker. Along with the dossier, Acreto is announcing new technology specifically designed to protect against and prevent direct and indirect data leaks to Facebook and other data collection platforms such as Google, among others. Facebook's New Product: You. Overall, the dossier explains how Facebook is intrusive for users and non-users alike. Most notable among recent events, the Cambridge Analytica scandal revealed a vast, deeply intrusive analytics manipulation with Facebook at its core. The extraordinary amount of private data harvested from Facebook was used to micro-target voters during the 2016 US presidential election. The information gathered from multiple testimonies to US and European legislators and regulators sheds light on Facebook's IoT strategy and sets the stage for intrusion of privacy of historic proportions. Nothing is more illuminating about Facebook's data collection strategy than its acquisition of Onavo, dubbed a "mobile data analytics company" but in actuality a 'man-in-the-middle' masquerade to collect, store and analyze all user communications for Facebook's use, benefit, and profit. Facebook came, Facebook saw… and Facebook continues to conquer: this time, your IoT devices. "Cambridge Analytica is the canary in the coal mine to a new Cold War emerging online. Soon the so-called 'Internet of Things' will become the norm in American households. Algorithms will soon be driving our cars and organising our lives. This is not just about technology today, we have to seriously consider the implications for tomorrow. To put it bluntly, we risk walking into the future blind and unprepared." – Christopher Wylie, Cambridge Analytica whistleblower. Cambridge Analytica and its parent company, SCL Elections, used a suite of political psyops tools in more than 200 elections around the planet. The vast majority of the targets were third-world and underdeveloped countries, many without the resources or knowledge to defend themselves. These efforts were in preparation for their biggest effort to date: the US 2016 Presidential Elections.
As we round the corner toward the 2018 mid-term elections, Facebook and its capabilities loom large, especially when there is no buy-in from the topmost echelon of political leadership. Your data is no longer your own. Facebook wants it all, and they want it now – to weaponize their most valuable product: The User. To read more about Russian nation state hacking of the US Elections and how cyberattacks come together, check out a two-part collaboration between Acreto CEO, Babak Pasdar, and former CTO of the CIA, Bob Flores, here.
Bloomberg Spy Chip – Bullshit? This is Part 1 of a two-part investigative deep-dive into the accusations of Bloomberg's recent article, 'The Big Hack'. Written by Bob Flores, former CTO of the CIA, and Babak Pasdar, CEO of Acreto IoT Security. In a recent blog, Babak Pasdar highlighted a Bloomberg report that claimed China had embedded hardware spy chips on servers from Supermicro. Supermicro provides data-center servers used by many companies, from small startups to the likes of Amazon and Apple. Bloomberg claims that the spy chips were discovered by a security auditor hired by Amazon AWS. This audit was part of acquisition due diligence on Elemental Technologies, a platform specializing in multi-screen video processing. Bloomberg claims that Amazon and Apple are among the organizations impacted by the alleged Chinese spy chip – and one by one, they have all denied that the story has merit. However, Bloomberg, long regarded as a model of news reporting, has refused to offer any additional information or, alternatively, to pull the story. There is a lot about this story that doesn't pass the smell test. If Supermicro servers have been compromised, it is a huge story. Though not a household name like Dell or HP, Supermicro is one of the top data center server platforms on the market. It is considered to be a good product with global availability at a fair price. In the article, Bloomberg makes a pointed accusation yet offers evidence that is at best vague. In the previous blog, we asked several questions: Who was the security audit company that discovered the spy chip? How did they get access to schematics to do chip-by-chip validation of the hardware – schematics that in any scenario would be considered trade secrets? If the spy chips were secretly installed by a Supermicro contractor, as the article claims, who QA'ed the hardware, and why was the chip not discovered during the QA process? Given the emphatic and detailed denials by both companies and the U.S. government, why has Bloomberg not released more detailed data to back up their claims? The implications are that China has backdoor access to countless systems, hosting applications and data, impacting thousands of companies and millions of individuals. The integrity of corporate, government and critical infrastructure is at stake – as well as personal data for large swaths of the population. Is This Realistically Possible? Bloomberg provided very little detail, and what they did provide was at best vague and not evidence-worthy. Based on the information they did provide, the industry take-away is that this vulnerability is via the server's IPMI interface. IPMI is an always-on IoT embedded in a server to manage the hardware, even if the server is powered off. As presented, the IPMI platform can theoretically be manipulated to function as a back door, providing access to the server's network, system memory and system bus. You can learn more about this in Pasdar's previous blog on this issue on our website. Having said that, for Bloomberg's vague spy chip explanation to work, you need a Supermicro motherboard with an on-board IPMI, and then many, many, many things have to line up for the compromise to work. First, an Internet-accessible IPMI connection with stateful outbound access is needed -- something no self-respecting organization with even a moderately experienced infrastructure team would have.
The chip Bloomberg presented in their article is physically too small to store and execute the code necessary to fulfill its purpose, so it would also need to connect to and download software from an external server. Hackers will never use an external server they own that references back to them – it would lead authorities right to them, and there would be no plausible deniability. The server is most likely another compromised system on the Internet. Moreover, the external server's address wouldn't be hard-coded into the chip. Compromised servers are disposable, since the compromise may be discovered and addressed at any point – or the system moved or decommissioned. If this occurs, the entire effort of the compromise would be a complete waste. A process like fast-fluxing or something similar would be used to enable the spy chip to connect to an ever-changing botnet network of external servers. Fast-fluxing was specifically developed to control botnets without compromising the bot-master's identity. It is a technique where the spy chip and the external server would meet to communicate at a particular fully qualified domain name (FQDN) at a particular time. Many different FQDNs spanning many different domains may be used to deliver content to the spy chip, based on the then-valid compromised IP addresses hosting the malware. The spy chip then needs to integrate into the server's OS, on-the-fly, during the boot process. This requires injecting the appropriate code for the specific OS used on the server. The OS could be one of dozens, if not hundreds, of possible options, since the Supermicro B1DRi motherboard that Bloomberg claims is compromised is certified compatible with many different OSes and associated versions. This includes 32-bit Red Hat, SUSE, Ubuntu and FreeBSD, as well as many versions of 64-bit Red Hat, Fedora, SUSE, Ubuntu, Solaris, FreeBSD, CentOS and Windows. Further, it also supports multiple hypervisor versions of VMware, KVM and Xen Server, not to mention Amazon AWS's proprietary hypervisor. Each of these OSes needs different code; even each version of the same OS may require altogether different code to be injected into the compromised system. Consider how quickly the spy chip would have to act: intercept local boot code, determine the OS brand, distro and version from a smattering of code flying across the computer's bus, perform the fast-flux operation and fetch the appropriate compromise code from the appropriate server. All of this -- which is a lot -- needs to happen for the spy chip to work.
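To make the fast-flux rendezvous idea concrete, here is a minimal, purely illustrative Python sketch of the kind of time-seeded domain generation such a chip and its controllers would both have to compute. The hashing scheme, window size and TLD list are invented for illustration:

    import hashlib
    from datetime import datetime, timezone

    TLDS = ["com", "net", "info"]  # invented for illustration

    def rendezvous_fqdn(when: datetime, window_hours: int = 6) -> str:
        # Implant and controller derive the same throwaway FQDN from the
        # current time window, so no server address lives in the hardware.
        bucket = when.strftime("%Y%m%d") + str(when.hour // window_hours)
        digest = hashlib.sha256(bucket.encode()).hexdigest()
        label = digest[:12]  # pseudo-random hostname label
        tld = TLDS[int(digest[12:14], 16) % len(TLDS)]
        return f"{label}.{tld}"

    print(rendezvous_fqdn(datetime.now(timezone.utc)))

Because both sides compute the same disposable FQDN independently, the compromised hosting infrastructure can churn constantly – which is exactly why disposable, ever-changing servers are the only workable design for the scenario Bloomberg describes, and exactly what makes the scenario so elaborate.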
Next Up: Bloomberg Spy Chip – Bullshit? Part 2: Let's Break Down the Claims.
This is Part 2 of a two-part investigative deep-dive into the accusations of Bloomberg's recent article, 'The Big Hack'. Written by Bob Flores, former CTO of the CIA, and Babak Pasdar, CEO of Acreto IoT Security. Bloomberg Spy Chip - Bullshit? Part 2. Now let's break down Bloomberg's claims further. In the article they present a graphical image of a Supermicro motherboard and strip away components until the spy chip can be seen. The motherboard they present is a Supermicro B1DRi with an AOC-GEH-i4M add-on module. As shown on the Supermicro web site, the B1DRi is designed to host up to two Intel E-2500 v3/v4 CPUs and up to 256 GB of 288-pin DDR4 memory, and it can be mounted to a sled with its own hard-disks. However, it is not a standalone server: it needs to be mounted in a blade enclosure to function. The enclosure provides power, hosts a network switch and, most importantly, has a shared IPMI management module. If the spy chip works through the IPMI, how can Bloomberg show the spy chip placed on the motherboard, when the IPMI for the board is an external module in the enclosure? The IPMI is an external module plugged into the enclosure, and to be used it must be individually assigned to each of up to 16 server blades in the enclosure, managing one blade at a time. If that is the case, then there is a 1-in-16 chance of compromising a given server, and even then it would be opportunistic and inconsistent, depending on which blade the IPMI happens to be set to manage on boot. Now – let's discuss the chip Bloomberg presented in the article. If the insanity of the logistics needed to effectuate this hack is not enough to make you call Bloomberg's story bullshit, then their presentation of the spy chip should be. The chip presented IS NOT A SPY CHIP; it is an RF balun – a standard, off-the-shelf Surface Mount Device (SMD) that converts between balanced and unbalanced signals, hence the name bal-un. Stesys and Farnell are two of the many component providers who sell them; you too can have one for a mere $1.67. And if the pictures were supposed to be mere examples of what a spy chip might look like and the type of motherboard it could be embedded on, Bloomberg certainly did not present them that way. Also, consider that a motherboard is an incredibly complex piece of equipment. These types of motherboards need to be extremely high performance and extremely compact at the same time, which makes them extremely dense. They are almost always multi-layer boards, where the traces connecting the various electronic components run across as many as a dozen different layers. And these systems are delicate: their operation requires the various electronic components to work in harmony. Frankensteining extra hardware onto the system would be, at the very least, challenging. The people within a company involved in R&D, design, procurement, manufacturing and testing of the motherboards are usually sequestered into groups whose access is limited to specific functional domains. Very few people have complete access to the designs and schematics for the entire board – and that almost never includes subcontractors, or some small security company out of Canada doing technical due diligence for a mundane acquisition. Furthermore, the people charged with manufacturing are typically not the same people who do quality assurance (QA). The job of QA is to test every permutation of every function.
We have to believe that QA's most fundamental tests would catch something as overt as communications where the spy chip tries to identify, fetch and inject packets on-the-fly. The number of people that would need to be turned or paid off would be staggering. As many as 30 – 50 people would need to be engaged throughout the supply chain, spanning multiple companies and countries. An amateurish and incredibly messy way to run a covert op. How Everything Comes Together. Because of the vague assertions, it is tough to argue definitively that any one aspect of the article is wrong. However, when you put it all together:
1. We don't know of many security companies that do reverse engineering on PCs as part of their due diligence.
2. Schematics are trade secrets and almost never available for complex multi-layer motherboards. How could the security company have had access to them?
3. The sheer number of people that would need to be involved in implanting the spy chips is staggering and doesn't make sense for this type of effort.
4. The QA process, known to be particularly meticulous, never caught the issue.
5. The ridiculous complexity of the hack, where the sun, the moon and the stars have to align for it to work.
6. Not only is this compromise overt and easy to identify, but the vast majority of organizations have built-in defenses against this attack vector – especially Apple and Amazon.
7. The need for an Internet-accessible IPMI network.
8. The need for the chip to fast-flux, connect to a remote system and pull down compromise code while the system is booting.
9. The complexity of pulling a different code set on-the-fly for each of the hundreds of unique operating system and revision combinations.
10. The B1DRi motherboard being part of a blade system without any on-board IPMI, which can only be managed one blade at a time.
11. The vagueness of the charges and the lack of any supplemental follow-up, while Bloomberg continues to sit silent.
12. And trying to sell us that an off-the-shelf $1.67 RF balun is a spy chip.
For these reasons, many of us believe the Bloomberg story just doesn't have a leg to stand on. Bloomberg has made explosive allegations. They have had a drastic negative impact on Supermicro's stock price – down 50% as of this writing. Their story is barely, if at all, viable. The information they provided was amateurishly vague. Their silence in the face of the backlash speaks volumes. And yet they continue to stand by their story and not recant. Add Bob Flores and Babak Pasdar to the growing list of skeptics. If you have evidence, then present it; and if you were conned, it is understandable – but please stand up and own it.
Security Shaming the Security Ostrich – Let's Make It A Thing. By Bob Gourley, ex-Chief Technology Officer for the Defense Intelligence Agency, and Babak Pasdar, CEO and CTO of Acreto IoT Security. We recently had a conversation with the CEO of an IoT manufacturing company to learn more about their strategy for IoT security. The conversation started with his immediate declaration: "Our IoTs are secure!" "You see," the CEO continued, "we use encrypted connections for all of our IoTs." Given his bold tone, we waited to hear the rest – it never came. We then inquired how he controls access, validates the integrity of the communication, verifies the integrity of data, validates the exchange of functional commands, and handles privacy and identity of the devices. He responded, "You have to understand that our devices aren't smart enough to be hacked." It was a dumbfounding response! We asked if his IoT devices use IP. "Yes," he replied. Are they on the Internet? Again, "Yes." Respectfully, is it possible they are just not smart enough to know they've been hacked? We went on to explain that even "dumb" IoTs are susceptible to and have been involved in many recent high-profile attacks. We even offered two examples of vulnerabilities that impacted devices like his. However, he was dismissive and unconvinced. This technology CEO is a security ostrich, choosing to bury his head in the sand rather than educate himself, hear different perspectives, and accept input from others. In another instance, at an event with Maciej Kranz, we met the CTO of a solution provider exclusively focused on building custom IoT-centric applications. When we asked this CTO how the organization handled IoT security, the answer was simple: "We use the certs from Amazon." We dug further and asked how these certs secured his customers' IoTs and applications. He said, and I quote: "Not sure. It's what Amazon offers -- they wouldn't sell something insecure." Though the exact opposite of the CEO above, this CTO is also a security ostrich. He had no curiosity about what happened to the platforms they developed for their customers. We have seen many other examples where savvy security officers take what they believe to be prudent steps to help mitigate risk for their newly developed IoT infrastructure. This is a difficult problem, and we empathize with any technologist trying to optimize their IoT security. Their challenge: utilize enterprise security tools and approaches for IoT security. A case in point is a CISO of a Fortune 500 company who tried very hard to segment his industrial IoT devices into separate networks – a very prudent step. He then acquired a commercial software product that operates at the network level specifically to help improve security. It acted a bit like the old Kerberos model in computer security, where a separate server gives permission for devices to join and communicate on the network. The problem with this approach is that we have not seen these enterprise security methodologies and technologies scale to the size IoT infrastructure requires. A bigger problem is that even if it works, it does not prove that a device operates securely once it is allowed on the network. Until now, that kind of magic has not existed. This is a case of a CISO trying to use yesterday's security tools to solve a next-generation problem, because that's all that was available. When the only tool you have is a hammer, you have to treat everything like a nail.
We exist in a time of unparalleled connectivity. With all the good that this connectivity serves, it also creates exposure. Exposure today is greater than ever, and modern countries – especially the US – are the most exposed. Cyber attacks don't just impact systems, data, publicity, and stock prices – attacks today impact economies and democracies. IoTs are driving a dependency compute model where each IoT, its dependent applications and the associated management platform all exist on many different public and private networks. Customers no longer control the entire infrastructure on which their IoTs and applications operate. This is why traditional enterprise security tools and approaches, designed to protect concentric networks, just don't work for IoT security – especially when multiple IoTs exist on a shared network, where each has a different function, for different use-cases, each using different remote applications operated by different entities. When different applications owned by different organizations service IoTs sharing a common customer network, all the different networks, IoTs and applications become exposed and vulnerable. It's not only that these devices are susceptible to compromise. Or that a compromised IoT impacts the integrity of the application and dataset it serves. It's not even that the company's customers and the customers' customers are impacted. By putting these vulnerable devices on the Internet, IoTs become force multipliers to launch new and more menacing attacks on many other public networks, systems, applications, and datasets. And with the prevalence of clouds, everything is public! IoT manufacturers and development shops should apply greater scrutiny to their IoT security. Despite an IoT's small size, with IoTs, everything is bigger. If the overly confident CEO and the disengaged CTO don't respect IoT security for their own product, company, and customers, then they should at least consider the impact their actions, or inaction, have on the rest of us. Isn't it time we started treating security like littering? Maybe we should make security shaming a thing, where the entire cyber community gets involved in shaming those who are reckless, disassociated and, especially, inappropriately bold – essentially all cases where those in the industry who are in a position to enact impactful change choose not to act. Could security shaming drive the change the IoT security industry needs? Perhaps! Better yet, we should treat security much like a public health crisis -- where even a single instance of an outbreak is treated with the greatest sense of urgency by the entire community. The behavior of the security ostrich is rather formulaic. Focus on functionality. When the system is reasonably functional, then focus on performance. And when it's performing reasonably well, then and only then do some turn their attention to security. By this point, the only options are bolt-ons and band-aids. Moreover, some practice self-centered risk-reward IoT security, where they choose not to enact security at all – in other words, when it costs more to secure some or all platform assets than they are worth to the organization. Though this may look like a business decision, in actuality it is a myopic perspective that empowers hackers – against everyone! Regardless of asset value, securing all assets with uniform and consistent security has a dramatic positive impact on the security big picture for everyone.
What is suggested here is akin to the "broken windows" policing model, where eliminating the small crimes dramatically reduces the big crimes. The IoT industry is still principally focused on function. Everyone is trying to get their heads around how to make everything actually work. However, it is precisely at this stage when there should be a focus on security – during the architecture and design phases. We can no longer sit back, look from the outside in, shrug and say it's their problem -- not mine. If there is one thing that the massive denial-of-service attacks, botnets, ransomware, and data thefts have taught us, it is that the security weak links on the Internet are weaponized against everyone. In one case the CEO was inappropriately confident; in another, the CTO was disengaged and trusting to a fault. These security ostrich executives hurt all of us – perhaps their actions are not malicious, but they are definitely negligent. And their actions impact business and consumer, global enterprises and family operations, Americans and allies, us – you – everyone. Most importantly, business leaders, tech executives and the tuned-in, concerned participants of the tech industry should learn a lesson from their errors. The CISO, however, truly cared about doing the right thing and was failed by the industry's lack of viable options for the IoT security challenge. This is especially true when cloud, IoT, and dependency compute are involved. In this case the security industry is too conservative and looks down on progressive approaches – and progressive approaches are precisely what this CISO needed. Let's invoke an old Internet term that needs to be resurrected: Be a good Netizen. Some, if not the majority, of the effort for IoT security falls on the manufacturers and developers. They have to provide viable options for the industry. But at the same time, customers and solution providers should be thoughtful and mandate security that drives the manufacturers and developers. Think of it this way: anyone who ignores IoT security recklessly and negligently drags their muddy shoes across everybody else's clean white carpet – when they should know better! Read the original 'Security Shaming' article here. Listen to the next podcast, Putin's Eleven – Inside Nation State Hacker Teams, here.
Russian Nation State Hackers & What We're Not Doing About It. By Bob Flores, former Chief Technology Officer of the CIA, and Babak Pasdar, CEO and CTO of Acreto IoT Security. The effective use of Russian nation state hackers led to a hacked election that has resulted in a hacked America. We're still licking our wounds and not doing anything about it. In fact, we are still arguing about whether it happened at all! Cybersecurity strategy incorporates the confluence of technology, business and geopolitics, with so many moving parts that to call it complex is an understatement. Strategies must span multiple geographies across a plurality of nations and continents. That is why no one can "go it alone." Today we need our friends more than ever – not just for geopolitics, but also for cyber defense. Collaboration is the underpinning of cybersecurity. As the largest global economy, comprising infrastructure, industry, enterprise and institutions, the US is the most technologically advanced. Many American companies span the globe, making the country one big glass house while the rest of the cyber world are kids with rocks on a dare. These "kids with rocks" fall into four major categories. First, there are hacktivists, who hack for their cause; the best known of these is the loosely bound group called Anonymous. The second category is terrorist organizations such as ISIS and Al Qaeda. These organizations recognize cyber warfare as a cornerstone of their mid- to long-term strategy and are working feverishly and investing heavily to bring those capabilities to maturity. The third group is financial hackers – best described as the Mob's and cartels' online arm. And finally, the most dangerous are state-sponsored hackers. Even though they operate behind triple- or quadruple-blind systems, which makes tracking them extremely difficult, they can be identified by their unique hacking techniques, or fingerprints. Nation state hackers are not the moody, lone-wolf, nocturnal teenagers cranking death metal and surviving on Amp energy drinks – that's a TV cliche. And hacking is not an organic game of pickup, where individual hackers are swapped indiscriminately. Nation state hackers are carefully curated teams that train, collaborate and solve problems together. Not only do they have to get along and gel over time, but they have to build and test the many foundational tools they need to perform the advanced objectives they are charged with. Sometimes this can take years! Let's Talk Hacking Fingerprints: Cyber-threat intelligence organizations that monitor and track Advanced Persistent Threats (APTs) use their threat fingerprints to build a profile on each team over time. The collection of fingerprints defines each team, otherwise called an APT. The profile fingerprints for the Russians, Chinese, North Koreans and Iranians all vary. Each APT, or hacking group, is assigned a unique number for identification. For example, APT37 is North Korea, APT34 is Iran, and the American election hacks are associated with APT28 and APT29 – which are Russian nation state hackers. In fact, APT28, otherwise known as "Fancy Bear", is a completely different team from APT29, "Cozy Bear", though both work for the Russian government.
As an example, here is a sample of the fingerprint for Fancy Bear (APT28), which has been tracked since 2007, and the reasons for American intelligence agencies' confidence in Russia as the source of the election hacks. Some quick-hit details for APT28: Its target sectors include the Caucasus (particularly Georgia), eastern European countries and militaries, the North Atlantic Treaty Organization (NATO), and other European security organizations and defense firms. APT28 is focused on cyber-espionage. As a summary overview: APT28 is a skilled team of developers and operators collecting intelligence on defense and geopolitical issues – intelligence that would be useful only to a government. This APT group compiles malware samples with Russian language settings during working hours (8 a.m. to 6 p.m.), consistent with the time zone of Russia's major cities, including Moscow and St. Petersburg. This suggests that APT28 receives direct, ongoing financial and other resources from a well-established organization, most likely the Russian government. Tools commonly used by APT28 include the SOURFACE downloader, its second-stage backdoor EVILTOSS and a modular family of implants dubbed CHOPSTICK. APT28 has employed RSA encryption to protect files and stolen information moved from the victim's network to the controller. It has also made incremental and systematic changes to the SOURFACE downloader and its surrounding ecosystem since 2007, indicating a long-standing and dedicated development effort. Known operations include Operation RussianDoll, in which Adobe and Windows zero-day exploits were leveraged in highly targeted attacks. There are other means of determining the source of attacks. Aside from fingerprinting, intelligence agencies track the sale of zero-day exploits purchased on the markets. Zero-days are exploits for previously unknown vulnerabilities. There are numerous commercial and underground organizations whose business is finding, exploiting and weaponizing vulnerabilities. Once an exploit is developed, it's put up for bid – and governments are the most affluent bidders. Commercial organizations offer them for sale on the public market to sanctioned agencies, while underground groups sell their exploits on the black market – the Dark Net – to the highest bidder, indiscriminately. In the case of juicy exploits, the buyer may pay significant sums for the privilege of exclusivity. The buyer wants the advantage of a weapon that nobody else has. All governments use a variety of proprietary techniques, technologies and informants to track the exploit inventory of both rival and allied countries. Ultimately, the recourse to cyber attacks is a blunt instrument in the form of the counter-attack. Counter-attacks may include counter-hacks, economic sanctions, embargoes, or a combination. However, a government typically gets involved only when large organizations or critical infrastructure are hit, and even then a response is reserved for the largest and most egregious attacks. The American election compromise is one such example. At this particular point in time, America has opted for a "go it alone" approach to global relationships, and collaboration on cyber issues is not exempt. As the occupant of "The Big Glass House" in a world of rock-throwing kids – especially Russian nation state hackers – America needs its friends more than ever. Even though we have been hacked, America is still Not Minding The Store.
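Returning to the fingerprinting technique for a moment: one of APT28's tells, compile timestamps clustering inside Moscow working hours, is straightforward to check. Here is a minimal Python sketch of that analysis, with made-up timestamps standing in for real malware sample metadata:

    from collections import Counter
    from datetime import datetime, timedelta, timezone

    MSK = timezone(timedelta(hours=3))  # Moscow time, UTC+3

    # Hypothetical compile timestamps pulled from malware sample headers.
    samples_utc = [
        datetime(2014, 3, 4, 6, 12, tzinfo=timezone.utc),
        datetime(2014, 3, 5, 9, 47, tzinfo=timezone.utc),
        datetime(2014, 3, 6, 13, 30, tzinfo=timezone.utc),
        datetime(2014, 3, 7, 7, 5, tzinfo=timezone.utc),
    ]

    # Bucket by local (Moscow) hour; a spike inside 8:00-18:00 local time
    # is the kind of pattern analysts use to infer a salaried, professional team.
    hours = Counter(ts.astimezone(MSK).hour for ts in samples_utc)
    for hour in sorted(hours):
        shift = "working hours" if 8 <= hour < 18 else "off hours"
        print(f"{hour:02d}:00 MSK  {hours[hour]} sample(s)  ({shift})")

No single data point like this is proof; it is the accumulation of such fingerprints over years that builds attribution confidence.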
Collaboration between government and commercial threat intelligence is key to a successful cyber strategy. The nation's top intelligence officer, Director of National Intelligence Dan Coats, indicated on Friday, July 13 – and I quote – that the "persistent danger of Russian cyberattacks today was akin to the warnings the United States had of stepped-up terror threats ahead of the Sept. 11, 2001, attacks. The system was blinking red," Coats said (nytimes.com). "Here we are nearly two decades later and I'm here to say the warning lights are blinking red again. Today, the digital infrastructure that serves this country is literally under attack. Every day, foreign actors - the worst offenders being Russia, China, Iran, and North Korea - are penetrating our digital infrastructure and conducting a range of cyber-intrusions and attacks against targets in the United States." Recently, Congress zeroed out nearly $400 million from the fund used to protect the integrity of our elections and has blocked subsequent efforts to fund it along party lines. In April 2018, the White House cybersecurity coordinator was relieved of his role less than six months from the November elections; as of the end of July, no replacement has been named. Moreover, tough sanctions passed by Congress in July 2017 were yet to be implemented as of July 2018. It may be too late for anyone to take the helm and implement meaningful protections at such a late stage. Collaborating to stop these attacks requires leadership, funding, a competent team, communications and sharing. At this point in time we have the competent team members, in the form of our intelligence agencies, raring to be let loose. However, there is no leadership, no mandate and no funding. We also find ourselves in a strange situation, with sparse dialog with our allies due to newly formed political trust issues. The patient is not in trouble because a first-year med student is the surgeon. Rather, the patient has been abandoned by the surgeon with little time to live, while the operating room is dark because nobody paid the utility bill. Next in this series we will look at an example of Russia's nation-state hacking teams and their construct in our blog: Putin's Eleven – Nation State Hacker Teams Uncovered.
Blockchain: it slices, dices and juliennes – but is there a Blockchain security function? The industry portrays Blockchain as the solution to the world's woes. Legacy companies like IBM, HP and Dell are touting Blockchain as the cure-all for anything and everything. Blockchain security seems to be the latest craze. In fact, the 'Blockchain as a security savior' message is so ubiquitously promoted and repeated that it has become an accepted fact. For many, Blockchain is not just secure – Blockchain IS security. We're here to tell you it's not. Here's why. Crypto technologies and their variants, such as Blockchain, were designed to fulfill the following capacities:
Denomination - Blockchain functions as crypto-currency, with a specific market value.
Transaction Processing - Blockchain exists as a denomination-independent way to process financial transactions, similar to a credit card.
Data Validation - Blockchain validates and verifies non-financial transactions and content.
Blockchain provides a decentralized way to process and validate transactions. This is done over public networks while the transacting parties and the processing parties maintain their anonymity. Once a transaction is validated, it is documented in a public ledger shared across many systems; these systems make up the Blockchain network. Business applications are built on multiple components, including endpoints, systems, hardware, programs and data-sets, all of which have exposure points, collectively referred to as an attack surface. Application platforms that use Blockchain are no exception. Though Blockchain is not susceptible to manipulation or fraud while in transit, it does nothing to secure the multiple attack surfaces and associated vulnerabilities of the platform components. This means the endpoints, servers, applications and clouds that make up the platform remain vulnerable. A compromise of any of these systems could allow the attacker to forge seemingly legitimate Blockchain transactions. The end result? A transaction that appears to be made by an authorized user and endpoint, and which is processed by an authorized application. Blockchain is incapable of offering any protection in this scenario. So what drives the industry to tout Blockchain as security? Even though proper cyber-security requires multiple functions (e.g., identity, controls, privacy and threat management, among others) to protect the entire application platform, Blockchain is limited to ensuring the integrity of the transactions. Without the implementation of other security functions, the entire platform remains exposed and vulnerable. Blockchain protects the transaction in a very limited and granular way, yet large swaths of the industry believe it is a new way to secure entire technology platforms! No doubt, this is an undesirable byproduct of marketing departments gone wild. In their clamor to "simplify" the complex nature of Blockchain, they have managed to confuse, convolute and even misdirect. It's like PayPal claiming that they protect your bank account. There are many benefits to using Blockchain as a denomination, for financial transaction processing or for non-financial data validation – but "Blockchain security" is not one of them. The sooner the industry is clear about the practical application of Blockchain, the more confidently it can be used in business applications. With that, Blockchain's growing use in real business applications can even stabilize the turbulent and unpredictable coin markets.
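A minimal hash-chain sketch makes the distinction concrete. The structure below is a simplification for illustration, not any particular blockchain's format: the chain detects after-the-fact tampering with recorded transactions, but a forged transaction submitted from a compromised endpoint validates like any legitimate one.

    import hashlib, json

    def block_hash(block: dict) -> str:
        payload = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    # Build a tiny chain: each block commits to the previous block's hash.
    chain = []
    prev = "0" * 64
    for tx in ["alice pays bob 5", "bob pays carol 2"]:
        block = {"tx": tx, "prev": prev}
        chain.append(block)
        prev = block_hash(block)

    def chain_is_valid(chain: list) -> bool:
        prev = "0" * 64
        for block in chain:
            if block["prev"] != prev:
                return False
            prev = block_hash(block)
        return True

    print(chain_is_valid(chain))             # True
    chain[0]["tx"] = "alice pays mallory 5"  # tamper with recorded history
    print(chain_is_valid(chain))             # False: tampering is detected

    # But a forged transaction submitted through a compromised endpoint
    # enters the chain as a perfectly "valid" block -- the ledger has no
    # way to tell that the request itself was fraudulent.

That last comment is the whole argument in miniature: ledger integrity is not platform security.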
Here is one of the articles that mislabels Blockchain's function: Blockchain Security: What keeps your transaction data safe. No company is more guilty of this than IBM. BONUS: The Blockchain craze has taken on such a life of its own that we created Bloxychain, a spoof based on old infomercials!
Russian Hacker Caught and Convicted: From US With Love. Written by Babak Pasdar, CEO and CTO of Acreto. A little while ago, a client called me in to do a security operations 'best practices' education session. They were a dot-com site that had recently spun off from one of the major financials. They had not yet laid down their sec-ops roots and were still establishing the fundamentals. They wanted an informal education session to get the entire team on the same page. Their conference room was packed with their security team as well as several people from their operations center, whose presence I had requested. In many instances, the ops team is on the front line and often identifies and conducts the initial steps in handling security incidents. At some point during the session, I started to talk about scammers. One trick that malicious people use is to acquire domain names similar to the site they are targeting. Since the client was a financial and their site contained personal information for hundreds of thousands of consumers, it was an attractive target. I first recommended they acquire or actively monitor all sound-alike and similar domains. For example, if their domain name is jacks.com, spelled J A C K S, they should acquire or monitor the domains spelled J A X dot com and J A K S dot com. Second, I recommended that all permutations of domains that could be mistyped by users be acquired or monitored as well; specifically, any combination of surrounding characters on the keyboard for each letter that makes up their domain name. For example, if their domain name is abc.com, they should monitor domains where the 'A' in abc.com is replaced with S, X, Z, W, or Q. If a company wanted to take it a step further, they would cover the next two surrounding characters on the keyboard as well. Should users mistype, which they often do, they should not be directed to a look-alike site to which they would innocently offer their credentials. Third, I suggested that the plural versions of the words in their domain name be acquired and monitored. As I was making this third point, I typed in the plural of their domain name – and their site showed up. I thought I had made a typo: that through muscle memory I had entered their correct domain name. I double-checked, and I had typed exactly what I intended to type – the incorrect, plural variant. I was impressed. I thought to myself that they were ahead of me and had already acquired the plural domain and redirected it to their site. "Smart! You guys already got this?" I said to the group. I looked around the room and saw confused expressions all around. Finally, someone said, "I don't think that we did – I'm pretty sure we didn't." After a dig on the Fully Qualified Domain Name (FQDN) and an MTR (a better traceroute), it became clear that the site was not theirs. It looked exactly like their site, including the login page. However, it was using neither their IP block nor any of their ISPs. It traced back to Las Vegas, Nevada. Needless to say, the training session abruptly ended and became a real-life incident response. The organization's executives, their general counsel, all security team members, and all IT managers and above joined an emergency meeting in the conference room. Anyone not on-site joined via conference bridge. During the meeting, their sharp help-desk manager offered that he had seen an increase in the number of calls for password-reset requests in the past two weeks. We started connecting the dots.
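As an aside, the keyboard-adjacency and plural checks recommended above are easy to automate. Here is a minimal Python sketch that generates candidate look-alike domains to acquire or monitor; the adjacency table is a deliberately partial, illustrative QWERTY map, not a complete one:

    # Generate look-alike domain candidates: keyboard-neighbor swaps plus
    # the plural variant, per the advice above. A real table would cover
    # every key; this one is trimmed to the letters in the example.
    ADJACENT = {
        "a": "qwsxz",
        "b": "vghn",
        "c": "xdfv",
    }

    def typo_candidates(name: str, tld: str = "com") -> set:
        candidates = {f"{name}s.{tld}"}  # plural variant
        for i, ch in enumerate(name):
            for alt in ADJACENT.get(ch, ""):
                candidates.add(f"{name[:i]}{alt}{name[i + 1:]}.{tld}")
        return candidates

    print(sorted(typo_candidates("abc")))
    # e.g. ['abcs.com', 'abd.com', 'abf.com', ..., 'qbc.com', 'sbc.com', ...]

Each candidate can then be fed to a WHOIS or DNS lookup to see whether someone else already owns it.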
We came away from the meeting with several action items:
• We needed to determine whether there had been a compromise, and if so, how many users it impacted and for how long.
• The help-desk team set out to cross-correlate password-reset support calls with the dates and times of failed authentication logins in their logs.
• They would identify any users who called for a password reset but had no corresponding failed login attempts in the logs. There were roughly a dozen, dating back only two weeks.
• The help-desk team contacted these users and established completely new identities for them.
• My team was to implement an emergency infrastructure should the malicious person attempt to use the stolen identities.
• I reached out to my contacts in the FBI cyber-crime team and reported the issue, and Agent Brown from the New York cybercrimes team was assigned to our case.
• We contacted a law firm with experience in cyber crimes, along with the organization's retained counsel.
• The legal team started to outline the notice required by compliance, should notifications become necessary.
After this, my team members and I set out to execute on a plan to identify and catch the person. First, a honeypot. The compromised user credentials correlated by the help desk were redirected to a training system that looked and functioned just like their application but contained dummy data. With this in place, the risk of any further data theft, manipulation or deletion was mitigated. Then we implemented a high-performance packet capture system, using a powerful server, a hardware-offloading network interface and several open-source tools, to collect all communications from the malicious person or people. We made sure that the packet capture system was implemented and operated to proper evidentiary chain-of-custody standards. Finally, we configured the units to send us text messages as soon as any of the compromised accounts were accessed. We were finally ready to track the malicious people. In less than forty-eight hours, we architected the infrastructure, acquired the highly specialized equipment required, and configured and tested it all. I then set out to document everything, including the operations runbook for these new systems, which covered evidentiary chain-of-custody handling of any evidence collected. I personally spent nearly seventy-two hours straight at the customer's data center, hopped up on adrenaline and coffee. It's rare to catch hackers and scammers, and I felt strongly that we had a good chance of doing so in this case. In the meantime, the FBI requested and received a subpoena for the IP address of the server as well as for the domain name registrar. Fortunately, the ISP quickly provided the physical address associated with the identified IP address. Agent Brown called the FBI field office in Nevada and requested that agents drive by and observe the address. A few hours later we received information that the address was actually a car dealership. The FBI agents in Nevada managed to trace the ISP connection to the basement of the dealership. When they inquired about the Internet connection, the dealership informed them that the basement was rented to another party who was hardly ever there. Technically, the malicious people had not yet done anything substantially criminal. So between the customer, the FBI and my team, we decided to hang back and wait for the malicious people to attempt access to the customer system and, more importantly, to download personal identity information.
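The help-desk cross-correlation described above boils down to a simple question: which password resets have no failed-login trail? A hedged sketch in Python, assuming tickets and auth logs have already been parsed into (user, timestamp) records; all names, dates and the time window are invented for illustration:

    from datetime import datetime, timedelta

    # Hypothetical parsed records: (user, timestamp) pairs.
    resets = [("jsmith", datetime(2018, 5, 3, 10, 0)),
              ("mlopez", datetime(2018, 5, 4, 14, 30)),
              ("kchen", datetime(2018, 5, 5, 9, 15))]
    failed_logins = [("jsmith", datetime(2018, 5, 3, 9, 55))]

    WINDOW = timedelta(hours=24)  # how far back to look for a failed login

    def explained_by_failed_login(user: str, when: datetime) -> bool:
        """True if the user actually fumbled a login near the reset call."""
        return any(u == user and abs(when - t) <= WINDOW
                   for u, t in failed_logins)

    # A reset with no failed-login trail suggests the password was stolen,
    # not forgotten -- these are the accounts to re-issue and investigate.
    suspicious = [u for u, t in resets if not explained_by_failed_login(u, t)]
    print(suspicious)  # ['mlopez', 'kchen']

The real correlation was over production ticketing and authentication logs, but the logic is exactly this set difference.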
There was no risk to any of the site's users, since the data the malicious people would access was made-up training data. We didn't have to wait long. At 3:00 a.m. the following day, my phone started buzzing with alerts. I quickly logged on to see what had transpired. Jackpot! The malicious people had logged on under three different accounts and had systematically accessed multiple identities before generating a report that can only be described as an identity-theft starter kit. A quick check showed a Canadian IP address as the source. Every packet of the communications was collected and logged. We had all that was required to completely recreate and replay the malicious people's entire effort. The session was short – it lasted only 15 minutes – but it was all that was necessary. There were no other attempts that day. Early the following morning, we contacted Agent Brown and the cybercrime task force supervisor and arranged for collection of the evidence. During the call we also determined our next course of action. The FBI could have reached out to the Canadian authorities but thought it best to try to lure the person to the US. The plan was that the FBI would get a court order to confiscate the computer in Las Vegas. If they spotted cameras, they would simply disconnect the Internet connection at the network terminal outside the building. And then the FBI surprised us: they had a person of interest in the case. They did not share many details about how they found this person. Our best guess is that the person had been on the FBI's radar and had somehow been associated with the stolen identity used to fraudulently pay for the acquired domain name and the Las Vegas basement housing the computer. If all went as planned, the malicious person would think there was a technical issue and come to fix it. Later that morning, FBI Agent Brown came to our offices and we held an evidence hand-off ceremony. The next day we noticed that the scam site had gone down. Now there was not much else for us to do but wait. All was quiet for a while, and life started to resume normalcy. Two weeks later we got word that there had been an arrest! It was a Russian national who, a few days after the site had gone down, had flown to Canada and from Canada to Las Vegas. He was arrested at the airport port of entry. Apparently, when presented with the evidence, he made a plea bargain and soon after pleaded guilty at the hearing. The team's dedication, professionalism and expertise drove this incident's success. Both the customer and my team operated flawlessly together, and the FBI came through in a big way. At a time when hackers attack indiscriminately, it felt great to catch one and snag a win for the good guys. Next up -- read about Russian Nation-State Hackers and What We're Not Doing About It here!
The company was founded and is led by an experienced management team with multiple successful cloud security innovations to its name. Learn more by visiting Acreto IoT Security on the web at acreto.io or on Twitter @acretoio.
The Business of Security vs. Security of Business Written by Babak Pasdar, CEO and CTO of Acreto. The security industry has spent a lot of time over the past 30 years thinking of imaginative ways to put lipstick on today's cybersecurity pig. It's like a one-hit-wonder band that never adapted, playing the same song and putting on the same show over and over, even though their fans, the industry, and the zeitgeist as a whole have evolved and moved on. We are more distributed and mobile than ever. Yet the security industry remains unevolved, putting on the same show, playing its all-time favorites like "On-Device Security" and its mega-hit "Gateway Security". Gateway security is an especially nuanced piece with broad range: there's the firewall, intrusion prevention, the VPN gateway, the proxy, URL and content filters, and the component that binds them all, SIEM. And that's the consolidated version of a lengthier and more complicated original score. Compute has changed and continues to change dramatically in front of our eyes. Clouds, SaaS, mobile devices, and the big daddy of them all, IoT, are contorting traditional security models and tools in ways never intended, until something breaks. And today everything is breaking, because security as we know it dates back to the medieval ages. Let's Get Medieval on Security. The king builds a castle (the network), puts a moat and drawbridge around it (gateway security), and posts sentries at the gate with special instructions (security policy). Need to operate outside the castle? If you have the strength (compute resources) and are wealthy enough to afford it (budget), you can put on custom armor (on-device security) and head out as a knight (remote user). Being a knight is exhausting, though. Yes, you are well protected, but it burns a lot of energy (security team resources). Commoners, meanwhile, have to assume the risk and live in a state of constant vulnerability. Clouds and IoT have driven the vast majority of our functions and users to operate "outside the castle". In fact, the business of the king's court is now distributed. Commoners live and work remotely, never needing to set foot in the castle. There are even scenarios where some commoners operate in and service other kingdoms near and far. When the court's subjects are remote and distributed, the king has two options: insist on keeping the castle, moat, and drawbridge, or adapt. So far the security industry has bitterly resisted adapting. Why? Tradition? Lack of alternatives? It's what they know? Or a combination of all three. Gateway security still has its uses, but the model is long in the tooth and its use cases are diminishing by the week. And on-device security has been an expensive, ineffective, and unsustainable failure. How can you package an entire data center's worth of security functions into a $5 sensor with the compute resources of a Timex watch? What the cloud started, IoT has finished. In the past compute was network-centric; now it is distributed all over and even mobile. And we like it. Initially CISOs tried to control users by saying no to cloud and SaaS. Users wouldn't have it. They shrugged, walked away, and did it anyway. There was no putting that toothpaste back in the tube once they had a taste of cloud and SaaS. Compute and technology have been democratized, but the way we secure them is still medieval. We have handed hackers an overwhelming advantage, all while spending billions and billions on security.
Vendors continue to monetize medieval security tools ill-suited to the new dominant compute model. How does this make sense? There are a few reasons. First, it's what people know and have bought into. There are 30-plus years of approaches and methods, tools and technologies, processes and performance indicators that have been developed around medieval security. It has become muscle memory for the many who spent years honing their skills around these approaches. Just imagine if suddenly, through magical circumstances, the rule of thumb became NOT to apply pressure to bleeding wounds. The countless established methods, processes, tools, and even tangential functions like billing would be upended. The result would be chaos! Arguably, security is experiencing a mild form of that chaos now. Second, there are a lot of vendor-centric security professionals who know and understand security through the prism of a particular vendor. This is not meant to be derogatory, since these professionals are the backbone of the security industry. However, many are not security operators; they are security product managers. In most instances, security is but one of several features that security tools sport, alongside functional and integration capabilities. Many security professionals are really, really good at keeping the lights on and the packets flowing, and they rely on the product to do its security stuff. Some vendors are so big and influential that more security professionals than we like to admit are exclusively committed to their tools. These professionals have done the economic calculus and built their careers around a single brand, strictly based on market opportunity. Many evolve only when vendors say it's time to evolve, for job-prospect purposes. And the evolution of certain security professionals is curiously bound to the vendor's business strategy: an arrangement that benefits the vendor and the professional, just not security. This brings me to the third point: the business of security vs. the security of business. It takes many years for new and emerging approaches or technologies to become mainstream. Large, influential vendors are focused on squeezing every last bit of economic value from their existing technology investments, while small, innovative companies just don't have the market megaphone. And pay-to-play analyst firms confuse matters further by offering tilted and skewed recommendations. Now, let's talk about the cyber hare vs. the security turtle. Hackers are cutting-edge. They are imaginative. They formulate crazy ideas meant to break the rules. The security industry counters with security professionals who are compelled to be conservative, to a fault. Hackers don't care about function and performance, whereas organizations prioritize both over security. Hackers can experiment and fail countless times, forging their own path along the way, while organizations identify gaps only by virtue of emerging product categories. It often takes anywhere from three to five years, depending on the organization, to implement a new product category for an emerging threat type. At that point the threat is not so emerging anymore! Moreover, organizations befuddle themselves by implementing a process, a very organized one at that, seemingly designed to assure failure.
This includes assessing requirements, assigning budget, talking to Gartner to see who paid them the most, evaluating several brands, selecting a technology, negotiating legal, purchasing, implementation, integration, administration, management, monitoring, and troubleshooting. Where is the agility?! Aside from the security functions the product offers, nothing in the process above even comes close to security operations. What does this mean? It means that hackers have a significant upper hand. This upper hand is so overwhelmingly one-sided that it has grown from the ability to impact a business into the ability to devastate economies and undermine democracies. Cyber: The Longest War. Today, everyone talks about the war in Afghanistan as our longest-running conflict. In the near future that distinction will easily be claimed by the global cyber-war. Every day, much like other security professionals, I see this war from our operations center. I see Russia, China, North Korea, Iran, and even some allies wage war against our infrastructure, if not by name (IP address), then by reputation (APT). If we have learned anything from the Afghan and Iraqi conflicts, it's that success does not always require a standing army. Special operations have radically shifted the methods of war: not only cheaper and faster, but also more effective for many missions around the world. Today the SpecOps model is being employed in the Syrian conflict. Maybe we should learn from the military and apply similarly seismic shifts to our security approach. Here's how. First, let's eliminate products from the equation. Building one-off security using tools that are ill-fitted to the emerging distributed and mobile compute model is security suicide. Products are always out of date, and security teams burn valuable resources performing technology refreshes and managing and troubleshooting products rather than operating security. Security as a utility is a much more effective approach. It is simpler and much faster to sign up and turn on than to buy and build out! Make implementation easy, and let development, upgrades, updates, and keeping the lights on be someone else's problem. The time your team does not spend babysitting products can be put to better use operating security. Second, fight hackers with (ethical) hackers. Build or train security teams of operators, not product administrators. Staff your team with critical thinkers who focus on how to break things rather than on mundane keeping-the-lights-on tasks. Not all hackers are foul-tempered, tattoo-laced, twenty-something rock stars with an ego. There are many agreeable, thoughtful, and reliable ethical hackers who can serve in foundational roles on your team. Most importantly, empower them and involve them from the beginning, at the application design, development, and rollout phases. The traditional medieval security model is not failing; it has already failed spectacularly. Arguably, it was never successful in achieving any of the objectives for which organizations have paid billions of dollars. The product-management approach to security is like trying to change the wheels while the car is doing 100 mph. You won't be able to do it, and you WILL get hurt along the way.
Today Mark sits down with Zach Pelka, the CEO of Mineful, a payments framework that integrates easily into any desktop application, enabling users to pay you with their spare computing power. Mineful uses this to generate cryptocurrency and pays users in USD directly to their banks. The pair discuss ways mining can create value for users of platforms like Spotify and games like Fortnite, from the comfort of their Macs, PCs, and IoTs. Is proof of work just getting started? Mineful.com @Mineful --- Send in a voice message: https://anchor.fm/cryptoconomy/message
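The episode doesn't detail Mineful's internals, but the proof-of-work concept it raises is easy to illustrate. Here is a toy mining loop in Python; the block data and difficulty are made up, and real miners hash far more efficiently than this.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 hash of block_data + nonce
    begins with `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra zero of difficulty multiplies the expected work by 16,
# which is why idle "spare computing power" is worth harvesting.
nonce = mine("example-block", difficulty=5)
print("found nonce:", nonce)
```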
Diane Mullenex is a Partner at Pinsent Masons LLP, leading the Global Telecom & Gaming Practices and Technology Media Telecom Middle East Practices departments, and Co-founder of the Power Women network. In a fast-paced interview, she speaks with Kimberley Cole about being bold: from her Japanese upbringing and work across Asia, Europe, Africa, and the Middle East to taking on complex digital transformation, including gaming, smart cities, IoT, and cyber security.
Welcome to the 254th Spin de Notícias, your daily spin of science news... at the subatomic scale. In this Spin de Notícias we talk about electronics! *This episode, like so many projects to come, was only made possible by SciCast's patron program. If you want more episodes like this, become a contributor!*
Larry Karisny, a post-quantum encryption, AI, and deep learning advisor based in the United States, participates in Risk Roundup to discuss IoT security in a post-quantum computer era. As the smart, autonomous future dawns upon us, the security risks for the rapidly growing […] The post Securing Internet of Things (IoTs) in a Post Quantum Computer Era appeared first on Risk Group.
PubNub is a data streaming network with connectivity in every country in the world that ensures low-latency data sharing through state-of-the-art APIs. An API, or application programming interface, allows one application to speak to another, and according to CTO Stephen Blum, there is a chance that every IoT device currently in your home is already connected to PubNub. Using a single network capable of handling hundreds of millions of connections at once, PubNub provides a secure network that allows mobile phones to interact with and control IoTs in less than a quarter of a second, even from the other side of the world. Using patented technology, PubNub has created a unique global network that finds the fastest path between mobile devices and IoTs. In addition to IoT device control, PubNub allows for real-time chat, updates, and messaging, as well as mobile push notifications. Looking ahead, Blum sees blockchain technology working with PubNub technology to create connections in a decentralized platform.
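For a flavor of what "one application speaking to another" through PubNub can look like, here is a minimal publish sketch using the PubNub Python SDK's builder-style API as documented. The keys, channel name, and message payload are placeholders for illustration, not details from the episode.

```python
from pubnub.pnconfiguration import PNConfiguration
from pubnub.pubnub import PubNub

# Placeholder keys; real apps use keys from the PubNub admin dashboard.
config = PNConfiguration()
config.publish_key = "demo"
config.subscribe_key = "demo"
config.uuid = "phone-client"

pubnub = PubNub(config)

# Publish a control message to a channel a hypothetical IoT device subscribes to.
envelope = pubnub.publish() \
    .channel("living-room-lights") \
    .message({"command": "on", "brightness": 80}) \
    .sync()

if envelope.status.is_error():
    print("publish failed:", envelope.status.error_data)
else:
    print("delivered at timetoken", envelope.result.timetoken)
```

A subscribing device would listen on the same channel and react to each message, which is the pattern behind the sub-quarter-second phone-to-device control described above.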
How can we combine IoT and blockchain, and what becomes possible when you use the two together to benefit the customer? At Gadget Astronaut we will be focusing on IoT devices and finding ways to use the blockchain to bring value to the customer. --- Support this podcast: https://anchor.fm/gogadgetpodcast/support
Scott Carey tries to get his pronunciation game on fleek to deliver the latest Uber news: there's a new CEO in town, and it isn't a woman. Should it have been? He tells Henry Burrell what's next for the company. Then roles are reversed as Henry updates Scott on the Galaxy Note 8, the LG V30, and September 12's very own iPhone 8. Will it be called that? Will anyone spend £1,000 on it? See acast.com/privacy for privacy and opt-out information.
Host Matt Egan is in a sombre mood this week as the tech industry comes to terms with a Donald Trump US presidency. A staff writer at Macworld UK dives into what this could mean for Silicon Valley, Apple products, and whether social media is at fault. Then producer Chris comes on to discuss the latest addition to the VR headset market, Google's Daydream. Will it be held back by a lack of applications, though? (14:45) Finally, the online editor at Computerworld UK talks about the biggest data breach at a UK bank, as Tesco Bank suffers a £2.5 million cyber theft, and what this means for the banking industry as a whole (25:00). See acast.com/privacy for privacy and opt-out information.
Microsoft presents its innovative Surface Studio, clearly inspired by the iMac, while Apple, the iMac's creator, announces a refresh of its MacBook Pro notebook line, for which, this time, the "Pro" label seems too big. The question of the day came from Felipe Ibarra, a listener who answered our call via Facebook Live. He asked what should be done to improve the Internet in Paraguay; the answers took up the first half hour of the show! The recording was made on November 8, #ElectionNight, before the result (not the one the panel wanted) was known: that Tr**mp would be the new @POTUS. As an election with worldwide implications, including for the world of technology, it naturally had its place in the episode. We talked about virtual assistants, the challenges they present in their current state, and the panelists' typical use cases. We mentioned in passing the massive DDoS that Dyn suffered via a botnet of IoTs, and since we were on the subject of security, we dug into certificate pinning and its advantages and disadvantages. We mentioned other podcasts worth subscribing to: Daily Tech News Show, a daily technology podcast hosted by Tom Merritt; Ctrl-Walt-Delete, a weekly podcast with veteran technology journalist Walt Mossberg and The Verge's Editor-in-Chief, Nilay Patel; and Radiolab, a biweekly podcast that explores curious stories with top-quality narrative and production. We kicked off a new segment, TIL (Today I Learned), where we share something we learned since the last episode. Finally, we looked back at the US election unfolding in real time during the recording, and at 1984, the year in which the films Ghostbusters, Footloose, and Indiana Jones and the Temple of Doom, and the songs Girls Just Wanna Have Fun (Cyndi Lauper), Hello (Lionel Richie), Self Control (Laura Branigan), and What's Love Got to Do With It (Tina Turner) provided the cultural backdrop to milestones such as the launch of the venerable Apple Macintosh, the introduction of the first desktop laser printer, the HP LaserJet, the FoxBase database, the 3.5-inch diskette, the founding of Dell and Cisco, the breakup of AT&T, and the birth of Mark Zuckerberg.
The Internet of Things has been heralded for years, yet the famous "smart refrigerator", the zombie of IoT, as Martin Spindler affectionately calls it in this podcast episode, is still not ready for market. It will probably never exist, because the usage scenario for such a device is hardly practical. This shows that what is needed, above all in the consumer space, are solutions that are easy to use and promise visible benefits. In manufacturing and industrial settings, by contrast, the Internet of Things is already a reality today: for example, in aircraft construction, where turbines are offered as a service, or in the area of smart cities. With IoT expert Martin Spindler, I talk about the current state of development, what needs to be considered in terms of security, and what we can expect in the future as more and more objects become connected.
TiO has been bought. Google’s Nest has been hacked. And you can buy a levitating speaker. Plus, what are the specs you look for in a sub? Host: Tim Albright, Founder Guests: Gary Yacoubian of SVS, John Huntington, and George Tucker. Record Date: 8/15/2014 Running Time: 58:25 Story Links Do subs have sound? John Huntington’s AVB recap [...]