This episode dives into OpenAI's promising new model, Strawberry, which could revolutionize interactions in ChatGPT. We explore the financial envy Nvidia employees inspire in their Google and Meta counterparts due to lucrative stock options. Google's new Pipe SQL syntax aims to simplify data querying, while concerns about research accessibility are raised. Finally, we discuss the BaichuanSEED and Dolphin models, which highlight advancements in extensible data collection and energy-efficient processing, paving the way for enhanced AI capabilities. Contact: sergi@earkind.com

Timestamps:
00:34 Introduction
01:40 OpenAI Races to Launch Strawberry
03:07 Google, Meta workers envy Nvidia staffers' fat paychecks: 'Bought a 100K car … all cash'
05:01 Google's New Pipe SQL Syntax
06:12 Fake sponsor
07:47 BaichuanSEED: Sharing the Potential of ExtensivE Data Collection and Deduplication by Introducing a Competitive Large Language Model Baseline
09:20 Dolphin: Long Context as a New Modality for Energy-Efficient On-Device Language Models
11:09 Eagle: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders
12:50 Outro
In today's episode, Andy has a special guest from our product development team at Hornetsecurity: Jean Paul (JP) Callus. The episode features an insightful discussion on how threats have morphed over the years. Andy and Jean Paul recount the days when backup primarily served as a safety net against accidental data loss and hardware failures. Fast forward to today, and backups have become a key weapon in the fight against ransomware and other sophisticated attacks. Tune in to discover the power of modern backups in the ever-evolving world of cybersecurity, and how organizations can establish seamless data protection measures that ensure minimal data loss and downtime in the face of cyber threats.

Timestamps:
(2:16) – Ransomware continues to drive backup and recovery decisions
(10:10) – How has the industry traditionally mitigated ransomware, and how are things done now?
(14:13) – Revisiting the 3-2-1 backup strategy and adding an extra "1"
(16:10) – Cloud backups and WORM (Write Once Read Many) states
(19:10) – What other backup technologies play a role in security?
(23:43) – Deduplication, immutability, and backup

Episode resources:
Podcast EP01: We Used ChatGPT to Create Ransomware
Podcast EP05: What is Immutability and Why Do Ransomware Gangs Hate it?
Hornetsecurity Ransomware Attack Survey
VM Backup V9
The Backup Bible

Find Andy on LinkedIn, Twitter or Mastodon. Find Jean Paul on LinkedIn.

This SysAdmin Day, win with Hornetsecurity! If you are a System/IT Admin and use Hyper-V or VMware, celebrate with us by signing up and trialling VM Backup V9 for a chance to win a Pixel Tablet! Find out more information here.
In our latest episode of the Backup to Basics series, we talk about what I think is the most important invention in my career: deduplication. Without dedupe, much of what we do in backup and recovery, and disaster recovery, would simply not be possible. Without dedupe there really is no disk backup market; there is no cloud backup market. I'd be out of a job! What is dedupe, anyway, and how does it work? What are the different kinds of dedupe and does that matter? You should learn a lot about this important topic.
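To make the teaser concrete before you listen: the core mechanic of dedupe can be sketched in a few lines. This is a deliberately simplified illustration, not any vendor's implementation; real products use variable-size chunking, persistent indexes, and compression, and the fixed 4 KiB chunk size here is an assumption purely for demonstration.

```python
import hashlib

CHUNK = 4096
store: dict[str, bytes] = {}   # hash -> unique chunk, stored exactly once

def write(data: bytes) -> list[str]:
    """Split data into chunks; store each unique chunk once, keyed by hash."""
    refs = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        key = hashlib.sha256(chunk).hexdigest()
        store.setdefault(key, chunk)   # no-op if this chunk was seen before
        refs.append(key)
    return refs

def read(refs: list[str]) -> bytes:
    """Reassemble a 'file' from its chunk references."""
    return b"".join(store[r] for r in refs)

backup1 = write(b"A" * 10000)   # first full backup
backup2 = write(b"A" * 10000)   # identical second backup: zero new chunks stored
assert read(backup2) == b"A" * 10000
print(f"{len(store)} unique chunks stored for two backups")
```

Two full "backups" of the same data consume a single set of unique chunks, which is exactly why the disk and cloud backup markets the episode mentions depend on dedupe.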
Sometimes running the latest and greatest means you have to pave your own path. This week, two examples of living on the edge.
1. Google Introduces Policy Circumvention - Google has added a new spam policy to its search spam policies: "policy circumvention." In short, if you take any action to bypass Google Search's other spam or content policies, such as creating new sites or using other sites or other methods to distribute that content, perhaps on third-party sites or other avenues, then Google will restrict or remove the content from showing up in search. Here is what Google wrote: "If you engage in actions intended to bypass our spam or content policies for Google Search, undermine restrictions placed on content, a site, or an account, or otherwise continue to distribute content that has been removed or made ineligible from surfacing, we may take appropriate action which could include restricting or removing eligibility for some of our search features (for example, Top Stories, Discover). Circumvention includes but is not limited to creating or using multiple sites or other methods intended to distribute content or engage in a behavior that was previously prohibited."

2. Google's Advice On When You Should Move Your Blogs To A Subdomain - John Mueller of Google recently shared his advice on when you should move blogs to a subdomain. John said he would move a blog to a subdomain over a www host when he thinks the content on the subdomain can live on its own: "My way of thinking with regards to subdomains is that it depends on what you're trying to do. Is it content that's meant to be tightly connected to the main site? Then put it on the main site. If you want the content to stand on its own, then a subdomain is a good match." He also shared that there are technical considerations to think about outside of SEO: "There's also the technical side-effect of subdomains sometimes making things a bit more complicated: verification in search console, tracking in analytics, DNS, hosting, security, CSPs, etc." Lastly, John added: "To be clear, I think it will affect rankings of the new content, but ultimately it depends on what you want to achieve with it. Sometimes you want something separated out, sometimes you want to see something as a part of the main site. These are different situations, and the results will differ."

3. Google: 60% Of The Internet Is Duplicate & Google Prefers HTTPS - Gary Illyes from Google shared during Google Search Central Live in Singapore that 60% of the content on the internet is duplicate. To find duplicates, Google compares checksums generated from the main content of each page; if the checksums match, the content is treated as duplicate (a minimal illustration of the checksum idea appears after this list). Lastly, Gary mentioned that Google will always pick an https URL over http. Ensure that you have https on your website, and focus on producing something far more unique and useful than most of what is already out there.

4. Google Reconfirms That E-A-T Applies To Every Single Search Query - At the recent SMX Next event, Hyung-Jin Kim, Vice President of Google Search (who has worked on search quality for the past 20 years and leads core ranking at Google Search), reconfirmed that E-A-T is used in every single query and applied to everything Google Search does. "E-A-T is a core part of our metrics," he added, explaining that it exists to "ensure the content that people consume is going to be, is not going to be harmful and it is going to be useful to the user." Here is the transcript of what he said: "E-A-T is a core part of our metrics and it stands for expertise, authoritativeness and trustworthiness. This has not always been there in Google, and it is something we developed about 10 to 12 to 13 years ago. And it is really there to make sure, along the lines of what we talked about earlier, that the content that people consume is going to be, is not going to be harmful and it is going to be useful to the user. These are principles we live by every single day. And E-A-T, that template of how we rate an individual site based on expertise, authoritativeness and trustworthiness, we do it to every single query and every single result. So it is actually pretty pervasive throughout everything we do. I will say that YMYL queries, the your-money-or-your-life queries, such as when I am looking for a mortgage or when I am looking for the local ER, those we have a particular eye on and pay a bit more attention to, because those are some of the most important decisions people will make in their lives. So I will say that E-A-T has a bit more of an impact there, but again, E-A-T applies to everything, every single query that we have."

5. Google Publishes A Guide To Current & Retired Ranking Systems - A new guide to Google's ranking systems lets you find out which algorithms Google uses to rank search results and which ones are no longer in use. Furthermore, Google distinguishes between ranking "systems" and ranking "updates" in this guide, using new terminology: a system, such as RankBrain, is always operating in the background, while an update describes a one-time adjustment to ranking systems. For instance, the helpful content system is always active in the background when Google returns search results, while being subject to modifications that enhance its performance; spam updates and core algorithm updates are examples of one-time adjustments. Here is the list, in alphabetical order, of Google's ranking systems that are currently operational:

BERT: Short for Bidirectional Encoder Representations from Transformers, BERT allows Google to understand how combinations of words can express different meanings and intent.
Crisis information systems: Google has systems in place to provide specific sets of information during times of crisis, such as SOS alerts when searching for natural disasters.
Deduplication systems: Google's search systems aim to avoid serving duplicate or near-duplicate webpages.
Exact match domain system: A system that ensures Google doesn't give too much credit to websites with domain names that exactly match a query.
Freshness systems: Systems designed to show fresher content for queries where it would be expected.
Helpful content system: A system designed to better ensure people see original, helpful content, rather than content made primarily to gain search engine traffic.
Link analysis systems and PageRank: Systems that determine what pages are about and which might be most helpful in response to a query based on how pages link to each other.
Local news systems: A system that surfaces local news sources when relevant to the query.
MUM: Short for Multitask Unified Model, MUM is an AI system capable of understanding and generating language. It improves featured snippet callouts and is not used for general ranking.
Neural matching: A system that helps Google understand representations of concepts in queries and pages and match them to one another.
Original content systems: Systems to help ensure Google shows original content prominently in search results, including original reporting, ahead of those who merely cite it.
Removal-based demotion systems: Systems that demote websites subject to a high volume of content removal requests.
Page experience system: A system that assesses various criteria to determine if a webpage provides a good user experience.
Passage ranking system: An AI system Google uses to identify individual sections or "passages" of a web page to better understand how relevant a page is to a search.
Product reviews system: A system that rewards high-quality product reviews written by expert authors with insightful analysis and original research.
RankBrain: An AI system that helps Google understand how words are related to concepts, allowing Google to return results that don't contain the exact words used in a query.
Reliable information systems: Google has multiple systems to show reliable information, such as elevating authoritative pages, demoting low-quality content, and rewarding quality journalism.
Site diversity system: A system that prevents Google from showing more than two webpage listings from the same site in the top results.
Spam detection systems: Systems that deal with content and behaviors that violate Google's spam policies.
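As a companion to item 3 above, here is a minimal sketch of checksum-based duplicate detection in Python. The page set, the whitespace/lowercase normalization, and the SHA-256 digest are all illustrative assumptions; Google's production pipeline operates on extracted main content and is far more sophisticated.

```python
import hashlib

def content_checksum(main_content: str) -> str:
    # Checksum over normalized main content (boilerplate assumed stripped upstream).
    normalized = " ".join(main_content.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def find_duplicates(pages: dict[str, str]) -> dict[str, list[str]]:
    # Group URLs by checksum; any group with more than one URL is a duplicate set.
    groups: dict[str, list[str]] = {}
    for url, content in pages.items():
        groups.setdefault(content_checksum(content), []).append(url)
    return {h: urls for h, urls in groups.items() if len(urls) > 1}

pages = {
    "https://example.com/a": "Widgets are great. Buy widgets today!",
    "https://example.com/b": "Widgets are great.  BUY widgets today!",  # identical after normalization
    "https://example.com/c": "A completely different article.",
}
print(find_duplicates(pages))  # /a and /b collapse into one duplicate group
```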
This week we look at deduplication. We cover the basics: what it is, why it matters, and how you benefit from it. We then look at the exceptions and nuances you need to consider.
If you're using both browser events from your pixel and server events from the API, you are probably sending the same events through each channel, and the API may pick up some events that the browser pixel misses. It's therefore important that Facebook runs a deduplication process to eliminate duplicates and prevent double-counting. Here, I walk through how to view your deduplication rate...
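For background on how that deduplication works: the browser pixel and the server send the same logical event with a shared event_id (and matching event_name), and Facebook keeps only one copy. Below is a hedged sketch of the server side; PIXEL_ID, the access token, and the Graph API version are placeholders, and the payload shape should be verified against Meta's current Conversions API documentation.

```python
import time
import requests  # third-party: pip install requests

PIXEL_ID = "YOUR_PIXEL_ID"      # placeholder
ACCESS_TOKEN = "YOUR_TOKEN"     # placeholder

# The same event_id the browser pixel fired for this purchase. Because both
# channels share it, Meta can drop whichever copy arrives second.
event = {
    "event_name": "Purchase",
    "event_time": int(time.time()),
    "event_id": "order-12345",
    "action_source": "website",
    "user_data": {"client_user_agent": "Mozilla/5.0"},
}

resp = requests.post(
    f"https://graph.facebook.com/v18.0/{PIXEL_ID}/events",  # API version is an assumption
    json={"data": [event]},
    params={"access_token": ACCESS_TOKEN},
    timeout=10,
)
print(resp.status_code, resp.json())
```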
George Crump, Chief Marketing Officer at StorONE, discusses the evolution of the backup and storage industry, explains why ransomware protection is key to protecting your data and backup infrastructure, and debunks some of the theories around all-flash backups.
In this show we've got everything for you, from creepy crawlers to blood-sucking vampires and walking zombies. First, we had a few unconfirmed Google search algorithm updates this week, one last weekend...
Marc Crespi, Director of Product Marketing at ExaGrid, dives into the state of backup appliances, some details on data deduplication, his view of the security landscape, and some of the limitations of storing data in the cloud.
The "hybrid cloud infrastructure" challenge: where is action needed? A conversation about solutions and trends. About this podcast: The trend toward software-defined and hybrid cloud infrastructures is undeniable, but it raises questions: Which data management and cloud strategy is right for my company? What are the central challenges to keep in mind (security, cost, complexity)? And how does a vendor like NetApp, traditionally oriented toward on-premises storage solutions, position itself in this dynamic market environment? Which focus areas in the solutions and project business are central for an IT services company in digital transformation such as Cancom? These and other questions are discussed in detail (running time: approx. 26 min.) in a Q&A format in this podcast. Two experienced experts in backup, high availability, cloud data, and storage management are on hand: Axel Frentzen, NetApp Senior Technical Partner Manager, and Daniel Harenkamp, Solution Architect at Cancom; the questions are asked by Norbert Deuschle of the Storage Consortium. The podcast covers the following topics (excerpt):
What are currently the biggest data management challenges for your customers? In the course of "cloudification," enterprise customers are putting flexibility, security, simplification, and cost reduction at the center of their strategic data and storage management initiatives.
What are NetApp's specific added values in the cloud?
Cloudification and the challenges around complexity, data protection (security), and compliance.
Security / cyber protection / ransomware and backup/restore: where does a service provider like Cancom currently see a need for action from a project perspective?
The cloud is not just a storage topic; it affects compute and IT infrastructure in general. In the context of digitalization, application-side technology trends such as API-driven solutions based on microservices and containers (cloud-native approaches) are gaining importance. How do NetApp and Cancom address these trends on the solution side?
What should customers do to find the most cost-efficient path to the cloud? Is "lift & shift" a solution? How can infrastructure costs in the cloud be determined, and what does the cloud actually cost? Which cost optimization potentials can be exploited? Which data is stored where (handling, security, cost)?
Space efficiency features such as deduplication and compression
Data tiering to cheaper cold storage (S3 / Blob)
Central data storage, with only caching at the edge (e.g., GFC)
Security through ransomware protection via snapshots, snapshot replication, FPolicy / VSCAN, etc.
The new S3 protocol in ONTAP
Storage connectivity for Kubernetes platforms (Trident)
Application-consistent backup service for Kubernetes apps (Astra Control)
Acquisitions such as DataSense, Talon, CloudJumper...
FinOps and security via NetApp Spot, CloudCheckr, and more
We find that existing language modeling datasets contain many near-duplicate examples and long repetitive substrings. Deduplication allows us to train models that emit memorised text ten times less frequently and require fewer training steps to achieve the same or better accuracy. We release code for reproducing our work and performing dataset deduplication at https://github.com/google-research/deduplicate-text-datasets. 2021: Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, D. Eck, Chris Callison-Burch, Nicholas Carlini. Keywords: NLP, Language. https://arxiv.org/pdf/2107.06499v1.pdf
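The paper's pipeline combines suffix-array-based removal of long repeated substrings with MinHash-based near-duplicate detection; as a much simpler illustration of the first line of defense, here is an exact-match dedupe over normalized examples (the normalization choice is an assumption for the sketch, not the paper's method):

```python
import hashlib

def dedupe_exact(examples: list[str]) -> list[str]:
    # Keep the first occurrence of each example, comparing after whitespace
    # collapsing and lowercasing so trivial variants hash identically.
    seen: set[str] = set()
    kept = []
    for text in examples:
        key = hashlib.sha256(" ".join(text.split()).lower().encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(text)
    return kept

docs = ["The cat sat.", "the  cat sat.", "A different document."]
print(dedupe_exact(docs))  # -> ['The cat sat.', 'A different document.']
```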
Podcast with Steven Polmans. The New Year is a time for new challenges, new commitments, and new beginnings. Steven has opened his door of opportunity by announcing that he is joining Nallian as Chief Customer Officer. Join us as we discuss how, why, when, and the areas of opportunity that now present themselves. We cover:
From a business relationship engaged in project success and mutual objectives to a natural opportunity
Now for someone who has lived it, clearly loved it, and now has the opportunity to sell it! Nallian and its many products and collaborative applications supporting efficiency and transparency: slot booking, kiosk self-registration, Check-it, the airport ecosystem
Enhancing confidence to share data and proving its value
Use what you need - data sharing
Sharing a single version of truth
Easy access to innovation
Relevant data
Short time to market
A new term that I am sure I will use: deduplication of information
A man who is excited, engaged, and passionate. Reminds me of a great saying: "If you do what you love, you will never work another day!"
Takin' a little trip down to Colorado with some fruit loops.
In this episode of Scaling Postgres, we discuss index deduplication in Postgres 13, fast hierarchical access, more essential monitoring and cloud native Postgres. Subscribe at https://www.scalingpostgres.com to get notified of new episodes. Links for this episode: https://www.cybertec-postgresql.com/en/b-tree-index-deduplication/ https://www.cybertec-postgresql.com/en/postgresql-speeding-up-recursive-queries-and-hierarchic-data/ https://pgdash.io/blog/essential-postgres-monitoring-part2.html https://pgdash.io/blog/essential-postgres-monitoring-part3.html https://www.2ndquadrant.com/en/blog/webinar-cloud-native-bdr-and-postgresql-follow-up/ https://www.highgo.ca/2020/06/01/optimizing-sql-simplifying-queries-with-window-functions/ https://www.percona.com/blog/2020/05/29/removing-postgresql-bottlenecks-caused-by-high-traffic/ https://www.enterprisedb.com/blog/postgresql-wal-write-ahead-logging-management-strategy-tradeoffs https://habr.com/en/company/postgrespro/blog/504498/ https://postgresql.life/post/markus_winand/ https://info.crunchydata.com/blog/spatial-constraints-with-postgis-part-2 https://info.crunchydata.com/blog/spatial-constraints-with-postgis-in-postgresql-part-3
In this week's digital marketing news: Google Search counts snippets as listings, Google helps users keep track of their recent recipe searches, a Google Ads optimization score trick leaves us speechless, learn how blogging has transformed over the past 5 years, Twitter teases an explore page extreme makeover, Spotify becomes a new social network with playlist stories, we mourn Mr. Peanut, and strangers become acquaintances with the snap of a photo.
Summary: In this 66th episode of Fintech Impact, Jason Pereira, award-winning financial planner, university lecturer, writer, and host, reviews his experience at the Salesforce World Tour conference in Toronto along with his colleague Alex Martin, a financial advisor with IPC Securities, Vice President at Craig & Taylor Associates, and a founding partner with Jason Pereira at Finally Technology.

Time Stamped Show Notes:
● 00:49 – Alex Martin introduces himself
● 02:01 – What is Salesforce
● 06:13 – Salesforce puts out multiple iterations on a yearly basis
● 07:05 – The importance of being able to build your own APIs on top of pre-existing software for businesses
● 09:23 – Alex discusses Salesforce's Customer 360 platform for bringing services into one place
● 12:08 – Deduplication is a problem that people deal with in legacy systems
● 13:03 – Alex's take on Salesforce's Einstein Voice
● 16:26 – The concept of 'next best action'
● 19:07 – What Salesforce's Quip platform is capable of
● 21:37 – How the Salesforce training program Trailhead is extremely useful
● 26:35 – Salesforce empowers a company's employees with the ability to self-learn

3 Key Points:
1. Salesforce is a development platform for everything from data services and AI platforms to sales service, marketing, and commerce.
2. Salesforce has different pieces called clouds for things such as service, marketing, and sales.
3. Quip is Salesforce's collaborative platform, like Slack meets Google Docs.

Tweetable Quotes:
"Salesforce was the first CRM (Customer Relations Management software) to be 100% developed for the web." – Alex Martin
"We need to pull all this data into one place, or at least have one central piece that is going to be able to mine all this data and talk to it and understand it." – Alex Martin
"The difference in Einstein (Voice) from other AIs is the fact that it is really supposed to be the piece that makes your data actionable." – Alex Martin

Resources Mentioned:
● Facebook – @Fintech_Impact
● LinkedIn – Jason Pereira
● LinkedIn – Alex Martin

See acast.com/privacy for privacy and opt-out information.
Jerome Wendt is the founder of DCIG. DCIG produces buyers' guides for different product categories, and in this podcast I talked to Jerome about the process behind the DCIG buyers' guides, specifically the Enterprise Deduplication Backup Target Appliance buyers' guide. HPE StoreOnce earned the recommended rating, and Simon Watkins from the Data Protection team joined us to discuss it.
In this Voice of Veritas podcast episode, we're digging into the truth in information. Roger Stein, Solutions Marketing, Veritas, interviews Mike Walten, Product Management, Veritas, in a discussion on Veritas appliances. Enterprises are running numerous concurrent backups with high deduplication rates, and with the data explosion they need to scale to petabyte capacities. "Scale" is the keyword, because not everyone needs the two petabytes available on the 5340 appliance. What do you really want most out of your appliance? Tune in to hear the key capabilities of the NetBackup 5340 Appliance, such as:
Optimized storage
Capacity scalable from 120 TB to almost 2 PB
Cost savings
Reduced downtime
Enhancing the quality of your work life
AND MORE!
To learn more about the NetBackup 5340 and all NetBackup Appliances, click here. See omnystudio.com/listener for privacy information.
In this Voice of Veritas podcast episode, we're digging into the truth in information. Roger Stein, Solutions Marketing, Veritas, interviews Paul Mayer, Product Management, Veritas, in a discussion on NetBackup CloudCatalyst. Since its announcement, NetBackup CloudCatalyst has become available as a physical appliance, as a virtual appliance, and as a container on the Flex Appliance. Customers are increasingly looking to take advantage of the lower costs of object storage in the cloud, and moving deduplicated data to cloud object storage reduces costs even further. Often, deduplicated data must be rehydrated and deduplicated a second time before it can be moved to object storage in the cloud. NetBackup CloudCatalyst eliminates this second step by moving deduplicated data directly from NetBackup to object storage in the cloud, saving time and ultimately cost. Tune in for the full episode and learn more about what NetBackup CloudCatalyst can do for your organization. See omnystudio.com/listener for privacy information.
Demo-rich overview of major advances coming to Windows Server 2019 including: managing hyper-converged infrastructure; support for storage class memory; the new Storage Migration Service; deduplication coming to ReFS; new Hybrid integration between Windows Server datacenter and Azure with the Windows Admin Center and more. Session THR2320 - Filmed Monday, September 24, 15:25 EDT at Microsoft Ignite in Orlando, Florida. Subject Matter Expert: Jeff Woolsey is a Principal Program Manager at Microsoft for Windows Server and a leading expert on Virtualization, Private and Hybrid Cloud. Jeff has worked on virtualization and server technology for over eighteen years. He plays a leading role in the Windows Server Engineering team, helping to shape the design requirements for Windows Server 2019, 2016, 2012 / R2, 2008 R2, Microsoft Hyper-V Server 2019, 2016, 2012/R2, 2008 R2, System Center 2012 / R2, and Windows Admin Center. Jeff is an authority on ways customers can benefit from virtualization, management, and cloud computing across public, private, and hybrid environments. Jeff is a sought-after hybrid cloud expert presenting in numerous keynotes and conferences worldwide with Bill Gates, Steve Ballmer, & Satya Nadella.
Jedidiah Yueh - Founder / Executive Chairman of Delphix discusses how Avamar deduplication technology was developed into one of the hottest technologies in the data storage and backup industry.
IT has a lot of questions about archiving. Is it really worth it? What should I use for archive storage? Will users embrace it? In our latest video podcast, Storage Switzerland Senior Analyst George Crump and I sit down with…
Archiving is a lot more than buying an object storage system and putting old files on it. There are a lot of things to consider before creating an archiving strategy. How do you archive? What are the best practices? Do…
First it was Ransomware. Now it’s two epic Hurricanes. What’s Next? Listen to our Podcast as Infrascale’s Director of Marketing, Carla Fedrigo joins Storage Switzerland’s Founder and Lead Analyst George Crump on a new podcast, “The State of Data Center…
Duplicate records and their variations are the banes of most companies' existence. They gum up aggregations in reports and create confusion inside organizations. See how Intricity uses some guided machine learning to get around these issues. Watch the Video on YouTube Related Whitepaper: The Intricity Data Deduping Factory (DDF) Talk with a Specialist: intricity.com/intricity101 www.intricity.com youtube.com/intricity101
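Not Intricity's actual method, but to make the problem tangible: exact matching misses "Acme Corp." vs "ACME Corporation", which is why record dedupe pipelines normalize and fuzzy-match before any machine learning gets involved. A toy sketch using only the Python standard library (the threshold and normalization rules are illustrative assumptions):

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    # Strip punctuation and case so cosmetic variants compare equal.
    return " ".join(name.lower().replace(",", " ").replace(".", " ").split())

def likely_duplicates(records: list[str], threshold: float = 0.7) -> list[tuple[str, str]]:
    # Pairwise fuzzy comparison; real pipelines block/index first to avoid O(n^2).
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            ratio = SequenceMatcher(None, normalize(records[i]), normalize(records[j])).ratio()
            if ratio >= threshold:
                pairs.append((records[i], records[j]))
    return pairs

customers = ["Acme Corp.", "ACME Corporation", "Acme, Corp", "Globex Inc."]
print(likely_duplicates(customers))  # the three Acme variants pair up; Globex does not
```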
There are a lot of options available to IT professionals looking to improve their ability to recover from a disaster. One is copy data management, very popular in the press. The other is DR Ready Primary Storage, something that Storage…
Phill Gilbert from the 3PAR product management team joins me to discuss 3PAR Adaptive Data Reduction, announced with the 3PAR OS 3.3.1. Adaptive Data Reduction includes Zero Detect, Deduplication, new Compression, and new Data Packing.
Disaster Recovery as a Service (DRaaS) is a recovery option that is getting a lot of attention right now. In this live podcast, Storage Switzerland and Carbonite cover exactly what DRaaS is and whether or not your organization should consider…
Don’t miss out! Sign up for Angular Remote Conf!
02:28 - Forrest Norvell Introduction (Twitter, GitHub)
02:37 - Rebecca Turner Introduction (Twitter, GitHub, Blog)
03:05 - Why npm 3 Exists and Changes in npm 2 => 3: Debugging, Life Cycle Ordering, Deduplication
08:36 - Housekeeping
09:47 - Peer Dependency Changes; The Singleton Pattern
15:38 - The Rewrite Process and How That Enabled Some of the Changes Coming Out (CJ Silverio: Npm registry deep dive @ Oneshot Oslo)
22:50 - shrinkwrapping
27:00 - Other Breaking Changes? Permissions
30:40 - Tiny Jewels
33:24 - Why Rewrite?
36:00 - npm’s Focus on the Front End (Bower, npm Roadmap)
42:04 - Transitioning to npm 3
42:54 - Installing npm 3
44:11 - Packaging with io.js and Node.js
45:16 - Being in Beta

Picks:
Slack List (Aimee)
Perceived Performance Fluent Conf Talks (Aimee)
Paul Irish: How Users Perceive the Speed of The Web Keynote @ Fluent 2015 (Aimee)
Subsistence Farming (AJ)
Developer On Fire Episode 017 - Charles Max Wood - Get Involved and Try New Things (Chuck)
Elevator Saga (Chuck)
BrazilJS (Forrest)
NodeConf Brazil (Forrest)
For quick testing: `npm init -y`, configure init (Forrest)
Where Can I Put Your Cheese? (Or What to Expect From npm@3) @ Boston Ember, May 2015 (Rebecca)
Open Source & Feelings Conference (Rebecca)
bugs [npm Documentation] (Rebecca)
docs [npm Documentation] (Rebecca)
repo [npm Documentation] (Rebecca)
Nutanix NEXT Community Podcast Episode 13 with your Hosts Laura Whalen, John Troyer, Dwayne Lessner and Angelo Luciani. This week, we chat with Kannan Muthukkaruppan, a Nutanix Engineer, and take a deep dive on Adaptive vs. Inline/Always-on Deduplication.
In some cases, organizations have unique file naming conventions, but file names are often created by people, which more often yields not-so-unique file names. One person names a file one way, and another person names the exact same file another way because they use it differently or in a different place. While this demonstrates a clear lack of consistency and governance, it happens far too often. This is especially true if you're not using a DAM solution with clear guidelines and stopgaps to catch these sorts of things as part of a workflow. So here is the dilemma: What do you do with exact duplicates? How do you even find exact duplicates? #AnotherDamBlog #AnotherDamPodcast #assets #DAM #DigitalAssetManagement #hash #HenrikDeGyor #Linkedin #podcast #CRC32 #dedupe #deduping #deduplication #DuplicateReduction #FileNaming #MD5 #SHA256 #SHA512 #checksum #octillion Questions? Email them to anotherdamblog@gmail.com
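Finding exact duplicates, as the hashtags above suggest, comes down to hashing file contents rather than trusting file names. A minimal sketch, assuming a placeholder ./assets directory and SHA-256 standing in for any of the checksums mentioned (CRC32, MD5, and SHA-512 work the same way here):

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def file_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    # Hash in 1 MiB chunks so large assets don't need to fit in memory.
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def exact_duplicates(root: str) -> dict[str, list[Path]]:
    # Two files with the same digest have identical bytes, whatever their names.
    groups: dict[str, list[Path]] = defaultdict(list)
    for p in Path(root).rglob("*"):
        if p.is_file():
            groups[file_digest(p)].append(p)
    return {d: ps for d, ps in groups.items() if len(ps) > 1}

for digest, paths in exact_duplicates("./assets").items():  # "./assets" is a placeholder
    print(digest[:12], [str(p) for p in paths])
```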
Dan Morris, Senior Systems Engineer at WhiteWater West, shares his experience with NetApp and Microsoft Hyper-V. Topics include virtualizing Exchange to dramatically improve email reliability, using deduplication to reclaim space, and simplifying backup and recovery with NetApp.
Hear how NetApp and VMware support Zuckerman Spaeder’s mission-critical Microsoft Applications
Hear Robert Ross discuss how NetApp and VMware power DCI's ITaaS business.
Learn about NetApp's storage efficiency technologies from Larry Freeman
NetApp experts discuss best practices and current myths around deduplication and heterogeneous storage environments.
Krish Padmanabhan discusses the incremental value deduplication offers in both disk and tape environments.
Larry Freeman discusses how to evaluate the right deduplication solution for your heterogeneous environment, including how deduplication differs between vendors and what to expect in terms of cost benefits.
Manish Goel, SVP and GM of the Data Protection and Retention business unit at NetApp, discusses virtualization's effects on storage resources and how data protection, including deduplication, can add greater value to virtualized environments as well as drawbacks and best practices for deployments.
Ravi Thota, director of Data and Storage Management at NetApp, discusses various deduplication options and recommendations for choosing the right solution for your environment.