Discover how Oracle APEX leverages OCI AI services to build smarter, more efficient applications. Hosts Lois Houston and Nikita Abraham interview APEX experts Chaitanya Koratamaddi, Apoorva Srinivas, and Toufiq Mohammed about how key services like OCI Vision, Oracle Digital Assistant, and Document Understanding integrate with Oracle APEX. Packed with real-world examples, this episode highlights all the ways you can enhance your APEX apps. Oracle APEX: Empowering Low Code Apps with AI: https://mylearn.oracle.com/ou/course/oracle-apex-empowering-low-code-apps-with-ai/146047/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we looked at how generative AI powers Oracle APEX and in today's episode, we're going to focus on integrating APEX with OCI AI Services. Lois: That's right, Niki. We're going to look at how you can use Oracle AI services like OCI Vision, Oracle Digital Assistant, Document Understanding, OCI Generative AI, and more to enhance your APEX apps. 01:03 Nikita: And to help us with it all, we've got three amazing experts with us, Chaitanya Koratamaddi, Director of Product Management at Oracle, and senior product managers, Apoorva Srinivas and Toufiq Mohammed. 
In today's episode, we'll go through each Oracle AI service and look at how it interacts with APEX. Apoorva, let's start with you. Can you explain what the OCI Vision service is? Apoorva: Oracle Cloud Infrastructure Vision is a serverless multi-tenant service accessible using the console or REST APIs. You can upload images to detect and classify objects in them. With prebuilt models available, developers can quickly build image recognition into their applications without machine learning expertise. OCI Vision service provides a fully managed model infrastructure. With complete integration with OCI Data Labeling, you can build custom models easily. OCI Vision service provides pretrained models: Image Classification, Object Detection, Face Detection, and Text Recognition. You can build custom models for Image Classification and Object Detection. 02:24 Lois: Ok. What about its use cases? How can OCI Vision make APEX apps more powerful? Apoorva: Using OCI Vision, you can make images and videos discoverable and searchable in your APEX app. You can use OCI Vision to detect and classify objects in the images. OCI Vision also highlights the objects using a red rectangular box. This comes in handy in use cases such as detecting vehicles that have violated the rules in traffic images. You can use OCI Vision to identify visual anomalies in your data. This is a very popular use case; for example, you can detect anomalies in X-ray images to screen for cancer. These are some of the most popular use cases for OCI Vision with your APEX app. But the possibilities are endless, and you can use OCI Vision for any of your image analysis needs. 03:29 Nikita: Let's shift gears to Oracle Digital Assistant. Chaitanya, can you tell us what it's all about? Chaitanya: Oracle Digital Assistant is a low-code conversational AI platform that allows businesses to build and deploy AI assistants.
It provides natural language understanding, automatic speech recognition, and text-to-speech capabilities to enable human-like interactions with customers and employees. Oracle Digital Assistant comes with prebuilt templates for you to get started. 04:00 Lois: What are its key features and benefits, Chaitanya? How does it enhance the user experience? Chaitanya: Oracle Digital Assistant provides conversational AI capabilities that include generative AI features, natural language understanding and ML, AI-powered voice, and analytics and insights. Integration with enterprise applications becomes easier with a unified conversational experience, prebuilt chatbots for Oracle Cloud applications, and chatbot architecture frameworks. Oracle Digital Assistant provides advanced conversational design tools, a conversational designer, a dialogue and domain trainer, and native multilingual support. Oracle Digital Assistant is open, scalable, and secure. It provides multi-channel support, automated bot-to-agent transfer, and integrated authentication profiles. 04:56 Nikita: And what about the architecture? What happens at the back end? Chaitanya: Developers assemble digital assistants from one or more skills. Skills can be based on prebuilt skills provided by Oracle or third parties, custom developed, or based on one of the many skill templates available. 05:16 Lois: Chaitanya, what exactly are “skills” within the Oracle Digital Assistant framework? Chaitanya: Skills are individual chatbots that are designed to interact with users and fulfill specific types of tasks. Each skill helps a user complete a task through a combination of text messages and simple UI elements like select lists. When a user request is submitted through a channel, the Digital Assistant routes the user's request to the most appropriate skill to satisfy the user's request. Skills can combine a multilingual NLP deep learning engine, a powerful dialog flow engine, and integration components to connect to back-end systems.
Skills provide a modular way to build your chatbot functionality. Now, users connect with a chatbot through channels such as Facebook, Microsoft Teams, or, in our case, an Oracle APEX chatbot, which is embedded into an APEX application. 06:21 Nikita: That's fascinating. So, what are some use cases of Oracle Digital Assistant in APEX apps? Chaitanya: Digital assistants streamline approval processes by collecting information, routing requests, and providing status updates. Digital assistants offer instant access to information and documentation, answering common questions and guiding users. Digital assistants assist sales teams by automating tasks, responding to inquiries, and guiding prospects through the sales funnel. Digital assistants facilitate procurement by managing orders, tracking deliveries, and handling supplier communication. Digital assistants simplify expense approvals by collecting reports, validating receipts, and routing them for managerial approval. Digital assistants manage inventory by tracking stock levels, reordering supplies, and providing real-time inventory updates. Digital assistants have become a common UX feature in any enterprise application. 07:28 Want to learn how to design stunning, responsive enterprise applications directly from your browser with minimal coding? The new Oracle APEX Developer Professional learning path and certification enables you to leverage AI-assisted development, including generative AI and Database 23ai, to build secure, scalable web and mobile applications with advanced AI-powered features. From now through May 15, 2025, we're waiving the certification exam fee (valued at $245). So, what are you waiting for? Visit mylearn.oracle.com to get started today. 08:09 Nikita: Welcome back! Thanks for that, Chaitanya. Toufiq, let's talk about the OCI Document Understanding service. What is it? Toufiq: Using this service, you can upload documents to extract text, tables, and other key data.
This means the service can automatically identify and extract relevant information from various types of documents, such as invoices, receipts, contracts, etc. The service is serverless and multitenant, which means you don't need to manage any servers or infrastructure. You can access this service using the console, REST APIs, SDK, or CLI, giving you multiple ways to integrate. 08:55 Nikita: What do we use for APEX apps? Toufiq: For APEX applications, we will be using REST APIs to integrate the service. Additionally, you can process individual files or batches of documents using the ProcessorJob API endpoint. This flexibility allows you to handle different volumes of documents efficiently, whether you need to process a single document or thousands at once. With these capabilities, the OCI Document Understanding service can significantly streamline your document processing tasks, saving time and reducing the potential for manual errors. 09:36 Lois: Ok. What are the different types of models available? How do they cater to various business needs? Toufiq: Let us start with pre-trained models. These are ready-to-use models that come right out of the box, offering a range of functionalities. The available models are: Optical Character Recognition (OCR), which enables the service to extract text from documents, allowing you to digitize scanned documents effortlessly and precisely extract text content; key-value extraction, useful in streamlining tasks like invoice processing; table extraction, which can intelligently extract tabular data from documents; document classification, which automatically categorizes documents based on their content; and OCR PDF, which enables seamless extraction of text from PDF files. Now, what if your business needs go beyond these pre-trained models? That's where custom models come into play. You have the flexibility to train and build your own models on top of these foundational pre-trained models.
Models available for training are key-value extraction and document classification. 10:50 Nikita: What does the architecture look like for OCI Document Understanding? Toufiq: You can ingest or supply the input file in two different ways. You can upload the file to an OCI Object Storage location, and in your request, you can point the Document Understanding service to pick the file from this Object Storage location. Alternatively, you can upload a file directly from your computer. Once the file is uploaded, the Document Understanding service can process the file and extract key information using the pre-trained models. You can also customize models to tailor the extraction to your data or use case. After processing the file, the Document Understanding service stores the results in JSON format in the Object Storage output bucket. Your Oracle APEX application can then read the JSON file from the Object Storage output location, parse the JSON, and store useful information in a local table or display it on the screen to the end user. 11:52 Lois: And what about use cases? How are various industries using this service? Toufiq: In financial services, you can utilize Document Understanding to extract data from financial statements, classify and categorize transactions, identify and extract payment details, and streamline tax document management. In manufacturing, you can perform text extraction from shipping labels and bill of lading documents, extract data from production reports, and identify and extract vendor details. In the healthcare industry, you can automatically process medical claims, extract patient information from forms, classify and categorize medical records, and identify and extract diagnostic codes. This is not an exhaustive list, but it provides insights into some industry-specific use cases for Document Understanding. 12:50 Nikita: Toufiq, let's switch to the big topic everyone's excited about—the OCI Generative AI Service. What exactly is it?
Toufiq: OCI Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models that cover a wide range of use cases. It provides enterprise-grade generative AI with data governance and security, which means only you have access to your data and custom-trained models. OCI Generative AI provides pre-trained, out-of-the-box LLMs for text generation, summarization, and text embedding. OCI Generative AI also provides the necessary tools and infrastructure to define models with your own business knowledge. 13:37 Lois: Generally speaking, how is OCI Generative AI useful? Toufiq: It supports various large language models. New models available from Meta and Cohere include Llama2, developed by Meta, and Cohere's Command model, their flagship text generation model. Additionally, Cohere offers the Summarize model, which provides high-quality summaries, accurately capturing essential information from documents, and the Embed model, which converts text to a vector embedding representation. OCI Generative AI also offers dedicated AI clusters, enabling you to host foundational models on private GPUs. It integrates LangChain, an open-source framework for developing new interfaces for generative AI applications powered by language models. Moreover, OCI Generative AI facilitates generative AI operations, providing content moderation controls, zero-downtime endpoint model swaps, and endpoint deactivation and activation capabilities. For each model endpoint, OCI Generative AI captures a series of analytics, including call statistics, tokens processed, and error counts. 14:58 Nikita: What about the architecture? How does it handle user input? Toufiq: Users can input natural language, input/output examples, and instructions. The LLM analyzes the text and can generate, summarize, transform, extract information, or classify text according to the user's request.
The response is sent back to the user in the specified format, which can include raw text or formatting like bullets and numbering. 15:30 Lois: Can you share some practical use cases for generative AI in APEX apps? Toufiq: Some of the OCI Generative AI use cases for your Oracle APEX apps include text summarization. Generative AI can quickly summarize lengthy documents such as articles, transcripts, doctor's notes, and internal documents. Businesses can utilize generative AI to draft marketing copy, emails, blog posts, and product descriptions efficiently. Generative AI-powered chatbots are capable of brainstorming, problem solving, and answering questions. With generative AI, content can be rewritten in different styles or languages. This is particularly useful for localization efforts and catering to diverse audiences. Generative AI can classify intent in customer chat logs, support tickets, and more. This helps businesses understand customer needs better and provide tailored responses and solutions. By searching call transcripts and internal knowledge sources, generative AI enables businesses to efficiently answer user queries. This enhances information retrieval and decision-making processes. 16:47 Lois: Before we let you go, can you explain what Select AI is? How is it different from the other AI services? Toufiq: Select AI is a feature of Autonomous Database. This is where Select AI differs from the other AI services. Be it OCI Vision, Document Understanding, or OCI Generative AI, these are all fully managed standalone services on Oracle Cloud, accessible via REST APIs. Whereas Select AI is a feature available in Autonomous Database. That means to use Select AI, you need Autonomous Database. 17:26 Nikita: And what can developers do with Select AI? Toufiq: Traditionally, SQL is the language used to query the data in the database. With Select AI, you can talk to the database and get insights from the data in the database using human language.
At its most basic, what Select AI does is generate SQL queries from natural language, an NL2SQL capability. 17:52 Nikita: How does it actually do that? Toufiq: When a user asks a question, the first thing Select AI does is look at the AI profile, which you, as a developer, define. The AI profile holds crucial information, such as table names, the LLM provider, and the credentials needed to authenticate with the LLM service. Next, Select AI constructs a prompt. This prompt includes information from the AI profile and the user's question. Essentially, it's a packet of information containing everything the LLM service needs to generate SQL. The next step is generating SQL using the LLM. The prompt prepared by Select AI is sent to the configured LLM service via REST. Which LLM to use is configured in the AI profile. The supported providers are OpenAI, Cohere, Azure OpenAI, and OCI Generative AI. Once the SQL is generated by the LLM service, it is returned to the application. The app can then handle the SQL query in various ways, such as displaying the SQL results in a report format or as charts. 19:05 Lois: This has been an incredible discussion! Thank you, Chaitanya, Apoorva, and Toufiq, for walking us through all of these amazing AI tools. If you're ready to dive deeper, visit mylearn.oracle.com and search for the Oracle APEX: Empowering Low Code Apps with AI course. You'll find step-by-step guides and demos for everything we covered today. Nikita: Until next week, this is Nikita Abraham… Lois: And Lois Houston signing off! 19:31 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
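The Select AI flow Toufiq describes (read the AI profile, build a prompt from profile metadata plus the user's question, send it to the configured LLM over REST, get SQL back) can be sketched in a few lines. This is an illustrative sketch only: the profile fields, the prompt wording, and the `fake_llm` stand-in for the real REST call are all assumptions, not the actual Select AI implementation inside Autonomous Database.

```python
# Illustrative sketch of the Select AI request flow described in the episode.
# Profile fields, prompt template, and the stand-in LLM call are hypothetical.

def build_prompt(profile: dict, question: str) -> str:
    """Step 2: combine AI-profile metadata with the user's question."""
    tables = ", ".join(profile["tables"])
    return (
        f"Provider: {profile['provider']}\n"
        f"Available tables: {tables}\n"
        f"Question: {question}\n"
        "Return a single SQL query."
    )

def select_ai(profile: dict, question: str, llm_call) -> str:
    """Steps 1-4: read the profile, build the prompt, call the LLM, return SQL."""
    prompt = build_prompt(profile, question)
    return llm_call(prompt)  # in reality, a REST call to the configured provider

def fake_llm(prompt: str) -> str:
    """Stand-in for the LLM endpoint (OpenAI, Cohere, Azure OpenAI, or
    OCI Generative AI per the episode); returns a canned query for the demo."""
    return "SELECT COUNT(*) FROM orders"

profile = {"provider": "oci_generative_ai", "tables": ["orders", "customers"]}
sql = select_ai(profile, "How many orders do we have?", fake_llm)
print(sql)  # the app would now run this SQL and render a report or chart
```

The point of the sketch is the separation of concerns: the profile supplies schema and provider configuration, the prompt is just that packet of information, and the application only ever sees the generated SQL.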
In this episode of Generation AI, hosts Dr. JC Bonilla and Ardis Kadiu examine how AI can support—not replace—human decision-makers in college admissions. They discuss the growing challenge of application volume (with schools like NYU receiving 120,000 applications) against limited staff resources, creating bottlenecks where reviewers must evaluate hundreds of applications in tight timeframes. The hosts explain how AI tools can bring consistency to evaluation, remove human bias, and dramatically speed up document processing while maintaining quality. They outline specific AI applications in admissions including document verification, essay evaluation, and transcript analysis, emphasizing that the goal is to enhance human judgment rather than replace it.

The Admissions Crunch: Record Application Numbers (00:00:57)
- NYU received 120,000 applications this year, the most for any private university
- University of Texas at Austin saw a 24% increase in applications
- University of Washington experienced 57% growth over five years
- Harvard received 60,000 applications for just 2,000 spots

The Human Workload Problem (00:02:46)
- Average public university has 60 new students per staff member
- Application-to-staff ratio can reach 600-650 for selective schools
- Each application takes 15-30 minutes to review
- Staff often have multiple responsibilities beyond application review

The Student Experience Impact (00:06:08)
- Long waiting times affect student decision-making
- Speed to response is critical for enrollment strategy
- Reducing review time from 45 days to 24 hours can increase enrollment
- Early decisions are a competitive advantage for institutions

How College Admissions Work (00:08:12)
- Holistic admissions evaluate academic achievement, personal statements, extracurriculars
- Rolling admissions focus on meeting basic criteria with rapid turnaround
- Both processes involve significant manual review
- Consistency and speed are major challenges

AI Advantages in the Admissions Process (00:11:24)
- Consistency and fairness through uniform evaluation standards
- AI applies the same criteria consistently across all applications
- Removes human bias that can affect evaluation outcomes
- Maintains evaluation standards even with reviewer fatigue

AI Technology in Document Processing (00:14:54)
- Optical Character Recognition (OCR) digitizes documents efficiently
- Natural Language Processing (NLP) analyzes qualitative aspects of applications
- Multimodal models like Gemini 2.0 extract data from documents accurately
- Reasoning models can evaluate applications based on specific criteria

Current State of AI in Admissions (00:17:53)
- OCR technology for transcript scanning exists but is limited
- Some schools use data extraction when loading information into CRMs
- Few comprehensive solutions integrate all aspects of AI evaluation
- Technology is just becoming available for full-scale implementation

Technical Applications of AI in Admissions (00:19:17)
- Document verification automates checking for completeness and accuracy
- Essay evaluation can extract themes about student readiness
- Course equivalency mapping connects courses across institutions
- Transcript evaluation helps with transfer credits and placement

Admissions as a Rubric (00:25:37)
- Admissions decisions are based on measurable criteria
- Criteria can be objective (in-state status) or subjective (leadership qualities)
- Digital application review transforms criteria into software evaluations
- AI can work effectively with rubric-based evaluation systems

Element451's AI-Powered Admissions Approach (00:29:08)
- AI agent ingests institutional rubrics and evaluation criteria
- System analyzes all submitted documents through data extraction
- Provides scores and reasoning for each evaluation criterion
- Places AI recommendation alongside human evaluation

The Future: Your First Read is Your Second Read (00:31:08)
- AI provides an efficient, fair, consistent, and scalable admissions process
- Goal is to enhance human judgment, not replace it
- Removes human bias while maintaining quality evaluation
- Frees staff time for personal interaction with prospective students

Connect With Our Co-Hosts:
Ardis Kadiu
https://www.linkedin.com/in/ardis/
https://twitter.com/ardis
Dr. JC Bonilla
https://www.linkedin.com/in/jcbonilla/
https://twitter.com/jbonillx

About The Enrollify Podcast Network:
Generation AI is a part of the Enrollify Podcast Network. If you like this podcast, chances are you'll like other Enrollify shows too! Enrollify is made possible by Element451 — the next-generation AI student engagement platform helping institutions create meaningful and personalized interactions with students. Learn more at element451.com.

Attend the 2025 Engage Summit! The Engage Summit is the premier conference for forward-thinking leaders and practitioners dedicated to exploring the transformative power of AI in education. Explore the strategies and tools to step into the next generation of student engagement, supercharged by AI. You'll leave ready to deliver the most personalized digital engagement experience every step of the way. Register now to secure your spot in Charlotte, NC, on June 24-25, 2025! Early bird registration ends February 1st -- https://engage.element451.com/register
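The rubric-based approach described in this episode (ingest criteria, score each one with reasoning, place the result alongside the human read) can be sketched as a toy evaluator. Everything here is hypothetical: the criteria, weights, and applicant fields are invented for illustration and are not Element451's actual system.

```python
# Toy sketch of rubric-based application scoring as described in the episode.
# Criteria, weights, and the extracted applicant fields are all hypothetical;
# the point is the shape: score each criterion, then aggregate with reasons.

RUBRIC = {
    # criterion name: (weight, predicate over extracted application data)
    "in_state":   (1.0, lambda app: app["state"] == "NY"),
    "gpa":        (2.0, lambda app: app["gpa"] >= 3.5),
    "leadership": (1.5, lambda app: len(app["leadership_roles"]) > 0),
}

def evaluate(app: dict) -> dict:
    """Score every rubric criterion and return a weighted total plus
    per-criterion outcomes, which a human reviewer can see side by side."""
    outcomes = {}
    total = 0.0
    for name, (weight, predicate) in RUBRIC.items():
        hit = predicate(app)
        outcomes[name] = hit
        if hit:
            total += weight
    return {"criteria": outcomes, "score": total}

applicant = {"state": "NY", "gpa": 3.8, "leadership_roles": ["debate captain"]}
result = evaluate(applicant)
print(result["score"])  # → 4.5 (all three hypothetical criteria satisfied)
```

A real system would derive the applicant fields from document extraction (OCR, NLP) rather than a hand-built dict, but the rubric-as-data structure is what makes the evaluation consistent across thousands of applications.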
* Chinese AI App DeepSeek Banned From Australian Government Devices
* OpenAI Data Breach Alleged: 20 Million Logins Reportedly Stolen
* Apple Removes Apps Infected with "SparkCat" Malware
* Australian Healthcare Sector Hardest Hit by Cyberattacks: Report
* Securing the No-Code SDLC: A New Approach Needed

Chinese AI App DeepSeek Banned From Australian Government Devices
https://www.sbs.com.au/news/article/chinese-ai-app-deepseek-banned-on-all-australian-government-devices/lm9udv4et
The Australian government has banned the use of the Chinese AI chatbot DeepSeek on all government-issued devices, citing national security concerns. This decision, effective immediately, follows warnings from intelligence agencies about the potential risks associated with the app. The ban comes amidst growing global concerns about the security and privacy implications of AI technologies developed in China. While the ban applies only to government entities, the government has urged Australians to be mindful of how their data is used online. This move follows a similar ban on the Chinese social media app TikTok earlier this year. DeepSeek's rapid rise to prominence has sparked a global debate about the future of AI development and the potential for geopolitical competition in this emerging field.

OpenAI Data Breach Alleged: 20 Million Logins Reportedly Stolen
https://gbhackers.com/openai-data-breach/
A concerning claim has emerged on dark web forums, alleging the theft and subsequent sale of over 20 million OpenAI user login credentials. The anonymous threat actor, who posted the claim, is offering the credentials for sale, raising serious concerns about the security of OpenAI's user data. While the authenticity of this claim remains unconfirmed, the potential impact of such a breach is significant. OpenAI accounts are often used for critical tasks, including academic research, professional projects, and sensitive content generation. OpenAI has not yet publicly addressed these claims. However, users are advised to take immediate precautions, such as changing passwords and enabling two-factor authentication, to protect their accounts. This incident serves as a stark reminder of the ever-evolving cyber threat landscape and the importance of robust security measures for all online platforms, especially those handling sensitive user data.

Apple Removes Apps Infected with "SparkCat" Malware
https://www.macrumors.com/2025/02/06/apple-removed-screen-reading-malware-apps/
Apple has removed 11 iOS apps from the App Store after they were found to contain malicious code designed to steal sensitive information from users' devices. Security firm Kaspersky discovered the malware, dubbed "SparkCat," which utilizes Optical Character Recognition (OCR) to scan user photos for sensitive data, such as cryptocurrency recovery phrases. The malware targeted users in Europe and Asia, attempting to gain access to user photos and extract valuable information. Apple also identified an additional 89 apps that had previously been rejected or removed from the App Store due to fraud concerns and were found to contain similar malicious code. This incident serves as a reminder for users to be cautious when downloading and installing apps from the App Store, particularly those from unknown developers. Apple recommends utilizing the App Privacy Report feature within the Settings app to monitor app access to sensitive data and avoid granting unnecessary permissions. By taking these precautions and exercising caution when downloading apps, users can significantly reduce their risk of exposure to malware and other malicious threats.

Australian Healthcare Sector Hardest Hit by Cyberattacks
https://cybercx.com.au/resource/dfir-threat-report-2025/
https://www.smh.com.au/technology/healthcare-and-finance-the-hardest-hit-by-cyberattacks-20250205-p5l9ns.html
The Australian healthcare sector faced the brunt of cyberattacks in the past year, according to a new report from cybersecurity firm CyberCX. The report revealed that healthcare accounted for 17% of all cyberattacks in Australia, followed by the financial services sector at 11%. The 2024 MediSecure data breach, impacting over 12 million Australians, stands as a stark reminder of the severity of these attacks. The report highlights a concerning trend: a significant increase in the time it takes to detect cyber espionage incidents, now averaging over 400 days. This suggests that attackers are becoming more sophisticated and persistent, operating within networks for extended periods. The report also emphasizes the growing prevalence of financially motivated attacks, with 65% of incidents driven by financial gain. These findings underscore the critical need for enhanced cybersecurity measures across all sectors, particularly in healthcare and finance, where sensitive data is highly valuable.

Securing the No-Code SDLC: A New Approach Needed
https://www.forbes.com/councils/forbestechcouncil/2025/02/10/securing-the-sdlc-for-no-code-environments/
Traditional software development relies heavily on a structured SDLC (Software Development Lifecycle) with security baked in at every stage. However, the rise of no-code development platforms has disrupted this model, presenting unique challenges for security teams. No-code platforms, which empower citizen developers to create applications with minimal coding, often bypass crucial SDLC stages like planning, analysis, and design. This lack of structured oversight can lead to critical security vulnerabilities. Traditional security measures, such as threat modeling and secure coding practices, are often impractical or inapplicable in the no-code environment. To effectively secure no-code development, organizations must adapt their approach. This involves:

* Focusing on later stages: Shifting the focus towards later stages of the SDLC, such as implementation, testing, and maintenance, where security measures can be most effectively applied.
* Implementing real-time security detection: Integrating automated tools that can detect vulnerabilities in real time within the no-code platform itself.
* Establishing robust testing and deployment policies: Mandating rigorous testing procedures and enforcing strict security checks before applications are deployed to production environments.
* Leveraging platform-level security: Advocating for no-code platforms to incorporate built-in security features, such as pre-configured secure connectors and automated compliance checks.

By adapting their approach and focusing on these key areas, organizations can empower citizen developers to innovate while ensuring the security and integrity of their no-code applications.

Special Thanks to Bradley Busch for contributing some of the interesting stories for this week's cyber bites. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit edwinkwan.substack.com
This is a conversation we had with Max and Jo from The New Frontier Podcast, but we felt it needed to be more widely available. In this episode, we dive deep into Amazon's RUFUS Blueprint—a game-changing patent that most sellers know nothing about. We break down exactly how RUFUS works, from noun phrase optimisation to visual label tagging and click training data, and how sellers can future-proof their listings. If you're serious about staying ahead in e-commerce, this is a must-listen. Host: Joanna Lambadjieva Guests: Danny McMillan, Oana (AI A9 Algorithm Specialist), and Andrew (Custom GPT Expert) Overview of the Episode In this captivating episode of the New Frontier Podcast, the discussion dives deep into the revolutionary RUFUS Blueprint, a groundbreaking Amazon patent that promises to reshape the way sellers optimise their product listings and leverage AI for superior e-commerce results. Hosted by Joanna Lambadjieva, with insightful contributions from Danny McMillan, Oana, and Andrew, this episode is a must-listen for e-commerce professionals looking to stay ahead of the curve. Highlights from the Conversation What is RUFUS? The team explores Amazon's RUFUS patent and its impact on optimising e-commerce experiences: Multimodal Capabilities: RUFUS processes text, visuals, and video, making it a versatile tool for personalised product recommendations. Learning Through Interaction: RUFUS evolves through click training data, analysing user behaviours across product categories, and refining recommendations over time. Key Insights and Strategies Noun Phrase Optimisation (NPO) Andrew explains how NPO is a game-changer: Traditional keywords like "lamp" or "chair" are insufficient. Instead, RUFUS thrives on semantic-rich noun phrases such as "hand-carved mahogany bookshelf" or "stainless steel pourover coffee maker". Building detailed noun stacks improves product discoverability by aligning with natural language queries. 
Actionable Tip: Sellers should integrate descriptive modifiers (materials, styles, purposes) into product titles, bullet points, and descriptions. Click Training Data Anana breaks down how RUFUS uses click behaviour: Every user interaction feeds into the system, refining future recommendations. Examples include linking features like "cushioning" with products like shoes for flat feet, evolving customer queries to reflect trends in user preferences. Takeaway: The dynamic nature of RUFUS queries underscores the importance of continuously updating product listings. Visual Label Tagging Andrew introduces the concept of visual label tagging: RUFUS uses Optical Character Recognition (OCR) to interpret text on images. Adding descriptive text to images (e.g., “portable and lightweight” or “ergonomic grip”) ensures RUFUS extracts relevant data. Pro Tip: Go beyond alt text—ensure images include textual overlays with key product features. Q&A Enhancements The patent's emphasis on Question-Answer (Q&A) optimisation offers sellers a new framework: Map out customer questions and create clusters by topic. Incorporate strong noun phrases into answers, aligning them with RUFUS's ability to infer and rank relevant responses. Example Strategy: In product descriptions, frame benefits around likely customer questions. For earbuds, structure the copy like this: Question: Are they comfortable for workouts? Answer: Designed with a secure adjustable hook for stability during intense exercise. Myths Dispelled and Real-World Applications Danny elaborates on common misconceptions about RUFUS: "Keyword stuffing" is obsolete; instead, sellers must focus on semantic relevance and contextual accuracy. Dynamic content and attribute enrichment are critical to staying competitive. Best Practice: Use RUFUS optimisations on underperforming products to safely experiment without risking top-selling listings. 
Final Thoughts: The team concludes with a forward-looking perspective. The integration of conversational, semantic, and visual optimisation signals the end of static, keyword-stuffed e-commerce. RUFUS represents a paradigm shift where sellers must think contextually, conversationally, and inferentially.
Key Message: Future-proof your Amazon strategy by embracing AI-driven tools like RUFUS to stay ahead in the evolving e-commerce landscape.
Resources Mentioned:
Connect with Joanna Lambadjieva: LinkedIn
Learn more about RUFUS and related tools via the Seller Sessions Live blog.
Grab Tickets: Seller Sessions Live is back May 10 – https://sellersessions.com/seller-sessions-live-2025/
New Frontier podcast on Spotify or Apple and Ecomtent
Jo: https://www.linkedin.com/in/joannalambadjieva/
Max: https://www.linkedin.com/in/max-sinclair-ai/
Oana: https://www.linkedin.com/in/oana-padurariu-b8b1b51a2/
Andrew: https://www.linkedin.com/in/andrew-bell-540403275/
Check out Rufus - The Blueprint: https://sellersessions.com/rufus-the-blueprint/
Testing 100 Amazon Product Listings with Rufus: My Findings
Capabilities of Rufus on a Product Detail Page with Andrew
In this episode, Andrew, a former Director of Amazon for Touch of Class and current Amazon Lead for the National Fire Protection Association, dives into the powerful features of Rufus and how it transforms the way customers interact with product detail pages.
Andrew's Background: Former Director of Amazon for a luxury home brand, Touch of Class (an eight-figure brand); created top-rated Amazon Custom GPTs; Amazon Lead at the National Fire Protection Association; self-taught in SEO, SGE, and Generative AI applications; holds a black belt in traditional Taekwondo and enjoys pickleball.
Rufus' Core Capability: Text Retrieval. Rufus uses Optical Character Recognition (OCR) to extract text from product information, customer reviews, and visuals. This technology allows for comprehensive data analysis that can enhance the accuracy of product details and reviews.
Rufus in Action: Extracts relevant insights from text, images, and customer feedback; moves beyond basic search terms, offering a more intuitive search experience for users; delivers highly relevant product information by utilizing advanced AI techniques.
Conclusion: Andrew explains how Rufus represents the future of product search and engagement, making customer interactions with product detail pages more insightful, efficient, and responsive to user needs.
Watch the full version on YouTube.
In this episode of Tech Talks Daily, I sit down with Lisen Kaci, co-founder of Discrepancy AI, a company that's pushing the boundaries of AI-powered document analysis. Lisen shares how Discrepancy AI transforms the way businesses handle unstructured documents like PDFs, invoices, financial records, and even heavily photocopied images. Through advanced AI, Discrepancy AI converts these into structured, searchable data, unlocking insights and improving workflows for legal firms, financial institutions, and beyond. We explore the fascinating origins of Discrepancy AI and how it addresses the limitations of traditional Optical Character Recognition (OCR) technology, which hasn't evolved much since 1974. Lisen details how their AI goes beyond simple text extraction, working with complex formats like charts, graphs, and tables, and even analyzing pixel-level data to detect signs of document tampering or fraud. This capability is transforming the way industries approach document integrity and security. Lisen also dives into the specific challenges legal firms face when processing massive volumes of documents, from contracts to tax filings. He discusses how AI can automate much of the tedious work, freeing up legal professionals to focus on higher-level tasks. With law firms increasingly interested in becoming AI-powered, Lisen shares his thoughts on how the industry is evolving toward more technology-driven practices. We also touch on privacy concerns and how Discrepancy AI ensures that data is handled securely, without training on customer information. Looking ahead, Lisen predicts a future where law firms will fully integrate AI solutions, allowing them to offer faster, more accurate services to their clients. Could AI truly revolutionize the legal industry, making processes more efficient and secure? Join us as we dive deep into the future of AI in legal tech with Lisen Kaci.
Co-hosts Mark Thompson and Steve Little explore the potential benefits of Meta's open-source approach to AI. Next, they discuss MyHeritage's plans to retire an AI feature. Then, they review the AI image generation features recently added to Adobe Illustrator. In this week's Tip of the Week, they share valuable insights on crafting effective, hallucination-resistant AI prompts using the "role, task, and format" prompting method. The RapidFire segment covers Apple's AI delays, Google's impressive math achievements, Reddit's web crawling restrictions, OpenAI's venture into AI-based search, and Meta's groundbreaking image recognition advancements. This episode offers a perfect blend of practical applications and future possibilities, making it essential listening for genealogists navigating the AI revolution. Whether you're a tech enthusiast or a family history buff, this show provides the knowledge you need to stay ahead in the rapidly changing world of AI.
Timestamps:
In the News
03:18 Meta's Impact on Corporate Genealogy: Discussion of Meta.ai's release and its implications
07:54 MyHeritage's AI Feature Removal: Retirement of AI Record Finder tool
10:11 Adobe's AI Integration: New AI features in Adobe Illustrator
Tip of the Week
19:32 Building Better Prompts: Role, Task, and Format: Explanation and examples of this prompting technique
RapidFire Topics
27:55 Apple's AI Delays: Postponement of Apple Intelligence tools
33:18 Google's AI Math Achievement: AI performance in Math Olympics
36:55 Reddit's Web Crawling Restrictions: Implications for AI training data
41:22 OpenAI's SearchGPT Development: Potential impact on search engines and competitors
45:30 Meta's SAM 2 Release: Advancements in image and video recognition
Resource Links:
Adobe Acrobat: https://acrobat.adobe.com/
Adobe Illustrator: https://www.adobe.com/products/illustrator.html
Adobe Lightroom: https://www.adobe.com/products/photoshop-lightroom.html
Adobe Photoshop: https://www.adobe.com/products/photoshop.html
Airtable: https://www.airtable.com/
Apple Intelligence: https://www.apple.com/apple-intelligence/
Canva: https://www.canva.com/
ChatGPT: https://chat.openai.com/
Claude (Anthropic): https://claude.ai/
FamilySearch: https://www.familysearch.org/
Gemini (Google): https://deepmind.google/technologies/gemini/
Google AI: https://ai.google/
Google Docs: https://docs.google.com/
Meta AI: https://ai.meta.com/
Microsoft Copilot: https://copilot.microsoft.com/
MyHeritage: https://www.myheritage.com/
OpenAI: https://openai.com/
Perplexity: https://www.perplexity.ai/
Reddit: https://www.reddit.com/
SearchGPT: https://openai.com/index/searchgpt-prototype/
Segment Anything Model (SAM) by Meta: https://ai.meta.com/sam2/
Tags: AI in Genealogy, Meta AI, MyHeritage, Adobe Illustrator, AI Prompts, Open Source AI, Apple Intelligence, Google AI, Reddit, OpenAI, SearchGPT, SAM 2, Image Recognition, Family History, Genealogy Tools, AI Ethics, Transcription, Optical Character Recognition (OCR), Handwritten Text Recognition (HTR), Perplexity, Large Language Models, AI-Enabled Search, Artificial Intelligence, Genealogical Research, AI Record Finder, Prompting Techniques, AI Integration, Web Crawling, Data Privacy, Machine Learning, Computer Vision, AI in Adobe Products, AI Math Capabilities, Digital Genealogy, AI-Powered Tools, Genealogy Software, AI Advancements, Family Tree Research, AI for Historians, Future of Genealogy
In this episode of Talking HealthTech, host Peter Birch sits down with Vikram Palit, CEO and Founder of ConsultMed. Vikram discusses his background as a paediatrician and his journey into technology, as well as the founding and purpose of ConsultMed. The conversation explores the challenges of paper-based referrals and documents in healthcare, and how ConsultMed aims to digitise and streamline the referral process.
Key Takeaways:
Digital transformation in healthcare is no longer optional; it's a strategic necessity to improve efficiency and data security.
ConsultMed integrates digital referrals and workflow automation, aiming to replace antiquated paper-based systems.
AI technologies like Optical Character Recognition (OCR) significantly streamline the referral process by enabling direct-to-EMR capabilities.
Digital referrals offer improved speed and traceability, bolstering accountability and reducing the risk of data mismanagement.
Advanced AI can assist in document summarisation, alleviating the administrative burden on healthcare professionals.
Compliance with data security and privacy regulations is paramount when implementing digital and AI-based systems.
Overcoming challenges in digital transformation requires an overhaul of ingrained practices and strong buy-in from healthcare professionals.
Integration of AI and digital referrals with existing EMR systems is a focal point for future developments.
Workforce implications, such as reducing burnout, are significant benefits of adopting automated systems.
As healthcare needs grow, the role of digital and AI technologies will become increasingly integral in delivering efficient and high-quality care.
Timestamps:
[00:00] - Intro
[00:14] - Meet Vikram
[00:38] - ConsultMed Intro
[01:26] - Paperwork Woes
[03:06] - Current Flow
[03:54] - Fax Issues
[05:53] - Reception Role
[06:17] - Data Entry
[08:12] - AI Cases
[13:35] - Data Security
[15:32] - Future Path
[16:40] - New Partners
[17:47] - Wrap Up
Check out the episode and full show notes on the Talking HealthTech website.
Loving the show? Leave us a review, and share it with someone who might get some value from it.
Keen to take your healthtech to the next level? Become a THT+ Member for access to our online community forum, quarterly summits and more exclusive content. For more information visit talkinghealthtech.com/thtplus
Scientific knowledge is predominantly stored in books and scientific journals, often in the form of PDFs. However, the PDF format leads to a loss of semantic information, particularly for mathematical expressions. We propose Nougat (Neural Optical Understanding for Academic Documents), a Visual Transformer model that performs an Optical Character Recognition (OCR) task for processing scientific documents into a markup language, and demonstrate the effectiveness of our model on a new dataset of scientific documents. The proposed approach offers a promising solution to enhance the accessibility of scientific knowledge in the digital age, by bridging the gap between human-readable documents and machine-readable text. We release the models and code to accelerate future work on scientific text recognition. 2023: Lukas Blecher, Guillem Cucurull, Thomas Scialom, Robert Stojnic https://arxiv.org/pdf/2308.13418v1.pdf
Natalia A. Gonzalez, PhD, Infinx Chief Data Scientist, shares insights from our research recently presented at the IEEE ICHI conference, demonstrating how referral workflows can be streamlined by efficiently handling varied document structures. Learn how innovative technologies like AI, Natural Language Processing (NLP), and Optical Character Recognition (OCR) are revolutionizing document understanding in healthcare referrals. This is a great opportunity to understand the future of healthcare referrals and how technology can reduce administrative burdens, minimize errors, and enhance patient care. Brought to you by www.infinx.com
The conversation this week is with Avinash Malladhi. Avinash is a PSPO-certified professional with over 15 years of experience implementing finance, artificial intelligence, and related IT projects. He holds an MBA in accounting from Wilmington University and has deepened his expertise in finance with Fintech courses from Harvard University. Throughout his career, he has networked with Fintech specialists from over 30 countries and stays informed about the latest developments in AI and Fintech. Currently, he is working at SAP as a consultant on the OpenText Vendor Invoice Management implementation team, where the solutions include machine learning capabilities to extract value using OCR.
If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can make future Emerging Technologies North non-profit events!
Emerging Technologies North
AppliedAI Meetup
Resources and Topics Mentioned in this Episode:
PSPO Certification
Fintech
Optical character recognition
Explainable AI
MNIST database
SAP Technology Consultant Professional Certificate
Enjoy!
Your host,
Justin Grammens
Hardware that my smartphone has replaced: Alarm clock. Address book. Audio description (no more DVDs). Books (text or audio). Bar code scanner. Calculator. Camera. Cash identifier. Credit card. Compass. Colour identifier. Computer. Count down timer. Dictionary. DAISY and media player. Find item. Games. GPS. Hearing assistance. Keyboard (Braille). Keyboard (QWERTY). Labeller. Learn Braille. Learn coding. Learn typing. Learn spelling. Large print clock. Light detector. Magnifier. Mobile phone. Music player. Note taker. Obstacle detection. Obstacle identification. Obstacle distance. Optical Character Recognition (OCR). Person (face) recognition. Pedometer. Phone mobile or land line (calls, texts, speaker phone, accessibility options etc). Radio. Spirit level. Stop watch. Reminder. Talking/large print watch. Timer. Torch. TV. Vibrating clock. Video real time assistance. Voice control. Voice dictation. Voice assistant. Voice recorder. Weather/temperature.
Support this Vision Australia Radio program: https://www.visionaustralia.org/donate?src=radio&type=0&_ga=2.182040610.46191917.1644183916-1718358749.1627963141
See omnystudio.com/listener for privacy information.
Federal Tech Podcast: Listen and learn how successful companies get federal contracts
When the history of twentieth-century technology is written, one of the giants will probably be Ray Kurzweil. As most listeners know, he designed the first Optical Character Recognition (OCR) machine, reducing the drudgery and error-inducing process of keying in forms. Today's interview is with Chris Harr from Hyperscience. During the interview, he gives listeners an understanding of how OCR has become Intelligent Document Processing. He argues that the founders of Hyperscience produced innovation that expands OCR's abilities while reducing clerical errors, improving performance, and delivering a better customer experience. Not only that, but the solution can also be scaled to handle an enormous number of documents. The ability to scale saves taxpayers money. A study conducted last year reported that four agencies process over 800 million documents a year. This number seems high until you think about the size of your tax return last year. Handling a massive number of documents also applies to artificial intelligence. It may not have occurred to you that a large part of the information that is poured into machine learning is generated from paper documents. Any effort at increasing the accuracy of that data means the results will improve.
Follow John Gilroy on Twitter @RayGilray
Follow John Gilroy on LinkedIn https://www.linkedin.com/in/john-gilroy/
Listen to past episodes of Federal Tech Podcast www.federaltechpodcast.com
Figure and Figure Caption Extraction for Mixed Raster and Vector PDFs: Digitization of Astronomical Literature with OCR Features by J. P. Naiman et al. on Monday 12 September
Scientific articles published prior to the "age of digitization" in the late 1990s contain figures which are "trapped" within their scanned pages. While progress to extract figures and their captions has been made, there is currently no robust method for this process. We present a YOLO-based method for use on scanned pages, post-Optical Character Recognition (OCR), which uses both grayscale and OCR features. When applied to the astrophysics literature holdings of the Astrophysics Data System (ADS), we find F1 scores of 90.9% (92.2%) for figures (figure captions) with an intersection-over-union (IOU) cut-off of 0.9, a significant improvement over other state-of-the-art methods.
arXiv: http://arxiv.org/abs/2209.04460v1
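The intersection-over-union (IOU) metric behind that 0.9 cut-off is a simple ratio of box overlap to box union. A minimal sketch, assuming boxes given as (x1, y1, x2, y2) corner tuples (an illustrative convention, not the paper's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle (may be empty).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A detection counts as correct at IOU 0.9 only when the predicted box covers the true figure almost exactly, which makes the reported F1 scores a strict measure.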
Optical character recognition, or OCR for short, describes algorithms and techniques (both electronic and mechanical) that convert images of text to machine-encoded text. Today on the show, Ahmad Anis shares how he applies Machine Learning to OCR for small hardware applications, for example, blurring a face in a video in real time or on a stream to safeguard privacy using AI. The panel also discusses various strategies related to learning and soft skills needed for success within the industry.
In this episode…
Optical character recognition (OCR) defined
Multiprocessing vs. multithreading
I/O-bound tasks vs. CPU-bound tasks
How to handle a retry in Python
Strategies for employing on small hardware
Template matching and preprocessing
Grayscaling integrations
How to learn and get started within the industry
Reducing the scope and industry soft skills
Sponsors: Top End Devs Coaching | Top End Devs
Links: LinkedIn: Ahmad Anis; Twitter: @AhmadMustafaAn1
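One of the topics listed above, handling a retry in Python, is a common need when an OCR pipeline hits flaky I/O (a camera frame, a network call). A minimal sketch of the pattern as a decorator; the names and retry counts are illustrative, not from the episode:

```python
import functools
import time

def retry(attempts=3, delay=0.01, exceptions=(Exception,)):
    """Re-run a flaky callable up to `attempts` times, pausing between tries."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    if attempt == attempts:
                        raise  # out of attempts: let the caller see the error
                    time.sleep(delay)
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3, exceptions=(IOError,))
def flaky_read():
    # Hypothetical I/O call that fails twice before succeeding.
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient failure")
    return "ok"
```

Scoping the `exceptions` tuple narrowly (here to `IOError`) keeps genuine bugs from being silently retried.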
Understanding document images (e.g., invoices) is a core but challenging task since it requires complex functions such as reading text and a holistic understanding of the document. Current Visual Document Understanding (VDU) methods outsource the task of reading text to off-the-shelf Optical Character Recognition (OCR) engines and focus on the understanding task with the OCR outputs. Although such OCR-based approaches have shown promising performance, they suffer from 1) high computational costs for using OCR; 2) inflexibility of OCR models on languages or types of documents; 3) OCR error propagation to the subsequent process. To address these issues, in this paper, we introduce a novel OCR-free VDU model named Donut, which stands for Document understanding transformer. As the first step in OCR-free VDU research, we propose a simple architecture (i.e., Transformer) with a pre-training objective (i.e., cross-entropy loss). Donut is conceptually simple yet effective.
2021: Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park
https://arxiv.org/pdf/2111.15664v2.pdf
Innovation Insights
With Donny Shimamoto
Center for Accounting Transformation
Today's Take-aways:
Accountants were the pioneers of business use of database software with accounting systems, the first wave of accounting technology. The second wave was the move into the cloud, and the third wave now uses AI in accounting technology.
AI using Optical Character Recognition (OCR) and Natural Language Processing (NLP) can read documents like PDFs and Excel files and extract the relevant data, such as lease dates, into the database where it can be used for computations or searching.
Because you're starting with the source document (e.g. PDF, Excel) and extracting the data from there to then put into the system, you can trace data all the way back to the source document within the same system (not just to the point where someone keyed the data into the system).
The goal isn't for the machines (AI) to do everything, but instead to complete 70-80% of the grunt work and allow the accountant or auditor to focus on the analytics and strategic work.
Small and regional firms can gain an advantage by partnering with accounting technology vendors, enabling them to innovate and incorporate technology into their services more quickly than the larger firms.
Bank statements, credit card statements, and tax forms all contain valuable data, but it's trapped on paper and in PDFs. We humans recognize the ink patterns as letters, but they contain no instructions for the computer. Optical Character Recognition (OCR) is how machines learn to read. We explore the mechanics of OCR - the scale of the paper problem in financial services and why paper-based data is so difficult for computers to extract. We look at how accuracy statistics for machines can be misleading and why that results in people - lots of people - staying involved in the digitization process. This week's conversation is a prelude to the next, where we'll look at OCR startups and the tremendous business opportunities they're starting to unlock. Check out this week's letter for the full story. Follow @FatTailThoughts on Twitter and your co-hosts @KleeBeard and @StevenDickens3 for more content.
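The point about misleading accuracy statistics can be made concrete with simple arithmetic: a per-character accuracy that sounds excellent compounds into a low per-document accuracy, which is why people stay in the loop. A back-of-the-envelope sketch with illustrative numbers (not figures from the episode), assuming independent per-character errors:

```python
def document_accuracy(char_accuracy, chars_per_document):
    # Probability that every character in the document is read correctly,
    # assuming each character fails independently.
    return char_accuracy ** chars_per_document

# A statement with 2,000 characters read at 99.9% per-character accuracy:
p = document_accuracy(0.999, 2000)
# p is roughly 0.135: only about one document in seven comes out error-free,
# so the rest still need a human to find and fix at least one mistake.
```

The independence assumption is a simplification, but it captures why "99.9% accurate" and "mostly error-free documents" are very different claims.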
We talk to our very own Cale Teeter about the current state of Blockchain technology and how it is changing the way we deploy these networks in Azure and build applications. Media file: https://azpodcast.blob.core.windows.net/episodes/Episode399.mp3 YouTube: https://youtu.be/_biNB9p5_Fw Resources: Migration announcement to QBS -> https://azure.microsoft.com/en-us/updates/action-required-migrate-your-azure-blockchain-service-data-by-10-september-2021/ QBS Preview -> https://consensys.net/quorum/qbs/ Quickstart for Developers -> https://consensys.net/quorum/developers/ NFT sidechain -> https://docs.palm.io/ Azure SQL Ledger -> https://docs.microsoft.com/en-us/azure/azure-sql/database/ledger-overview Confidential Consortium Framework -> https://microsoft.github.io/CCF/main/overview/concepts.html Other updates: SpaceCows – using AI, space technology and cloud to protect the Top End – Microsoft Australia News Centre Computer Vision Read API for Optical Character Recognition (OCR), part of Cognitive Services, announces its public preview with new languages including Russian, Bulgarian, other Cyrillic and more Latin languages. This release also highlights handwritten OCR support for many languages, along with enhancements for digital PDFs and Machine Readable Zone (MRZ) text in identity documents. 
Join us on Azure IaaS Day: Learn to increase agility and resiliency of your infrastructure https://azure.microsoft.com/en-us/blog/join-us-on-azure-iaas-day-learn-to-increase-agility-and-resiliency-of-your-infrastructure/ 5 reasons to join the Modernize Apps and Data with Azure and Power Apps free virtual event https://azure.microsoft.com/en-us/blog/5-reasons-to-join-the-modernize-apps-and-data-with-azure-and-power-apps-free-virtual-event/ The Enclave Device Blueprint for confidential computing at the edge https://azure.microsoft.com/en-us/blog/the-enclave-device-blueprint-for-confidential-computing-at-the-edge/ Public preview: Azure Spring Cloud RBAC config server and registry access and Nginx logs and metrics | Azure updates | Microsoft Azure General availability: Azure Spring Cloud application health monitoring and end-to-end TLS/SSL | Azure updates | Microsoft Azure Public preview: Visual Studio Code for the Web | Azure updates | Microsoft Azure
In this podcast, we're hearing from musician and creator of PlayScore 2 Anthony Wilkes. PlayScore 2 is a music scanning app from Organum Ltd. Anthony talks about how he came to develop the app and its exciting features that make it essential for any musician. We also learn about the technology behind PlayScore 2 called Optical Music Recognition (also known as OMR). In short, it is the musical version of Optical Character Recognition (OCR). Anthony talks about the development of OMR, some of its challenges and whether technology has improved or impeded the way we learn music today. Optical Music Recognition on Wikipedia: https://en.wikipedia.org/wiki/Optical_music_recognition. playscore.co Organum Ltd is a UK company based in Oxford specialising in printed and handwritten optical music recognition. Anthony also created the handwritten music recognition engine in the popular NotateMe app, and the PhotoScore application from Neuratron Ltd. As a musician Anthony studied cello with Caroline Bosanquet and Rohan de Saram, and plays in several ensembles. You can also see Anthony's composer's page on the IMSLP free music site. Podcast recorded on 14 October 2021; published 18 October 2021. --- Send in a voice message: https://anchor.fm/talking-classical-podcast/message
I'm about halfway through reading 'My Brilliant Friend'. I say I'm reading it. I'm actually listening to it as an audio book. There are those that say this isn't reading at all and that it's a lesser activity. These people are predominantly people who can see to read a Penguin Classic, which, let's face it, not many of us can do when you consider the meanness of ink and paper involved. Even in the world of sight loss, there are those who share this purist view of reading. They favour the generosity of a braille manuscript, that you need a wheelbarrow to shift. I prefer the audio file. It's all positional. I agree that the voice of a narrator is a very different experience from the translation of a printed page to a picture in your head, but the audio book, that had its origins in making books accessible to those of us who don't read print, is in the ascendancy. You don't have to be blind to appreciate its charms. Everyone is doing it. Back in the day, blind subscribers to the tape library were treated to the tones of the volunteer reader, who delivered the doings of drama in a tone that sometimes strangled the moment. There were even the sounds of the odd doorbell and dogs barking. I haven't tried it myself, but a friend told me that "you haven't lived until you've heard porn read by a volunteer reader". Ten years ago a publisher I know told me that audio books would never take off because they were associated with blindness and "who wants to get lumped in with a load of blind people." These days I can download just about any audio book. You can even get most of the curriculum in an audio file if you want it. I bet it won't be long before publishers see the potential and switching on the audio or large print function comes as standard on all digital formats. That should be a challenge for Biff and Chip and Kipper. 
Blindness has been a wonderful midwife: There were probably those who thought that people would never give up the quill in favour of the fountain pen or the typewriter. Both are early examples of technology for blind people. Then there is the thing that we all take for granted whenever we cross border control, or use a bank card, or send a pdf. Optical Character Recognition (OCR) began as a method of scanning and speaking everything from the gas bill to novels for the benefit of blind people. It gave us the very first mouse (to be caressed, not trapped). Now both mouse and character recognition technology are firmly embedded in our everyday life, and aren't we the better for it? I'm not much of a techy but I've embraced it. It gets me from A to B with a talking map, allows me to command my friends to do strange things through text messages that cannot be recalled once sent, and keeps me reading. Funny thing is that all these great ideas, designed to help overcome blindness, seem to have got lumped in with a load of people who can see and who have long forgotten just who gave birth to these inventions. What a pity that so few blind people can afford the rising costs of technology that once had them at its heart.
Vispero Presentation: Michelle Williams will discuss Optical Character Recognition (OCR) with a low-vision device. Sponsored by Vispero
Magnifiers come in all shapes and sizes for all ages and needs. On this podcast, we'll talk to experts about a brand new magnifier on the way, learn about APH's Low Vision Strategy and celebrate Black History Month.
Guests (in order of appearance on podcast):
Justine Taylor, APH Low Vision Product Manager
Martin Munsen, Kentucky School for the Blind, Outreach Director
Greg Stilson, APH Head of Global Innovation
Mr. Louis Tutt, former EOT and special guest
Black History Month Links to Awards - Links are directly to the images:
https://www.aph.org/aph-awards-plaques/
https://www.aph.org/wings-of-freedom-award-plaque-2014-2018/
https://www.aph.org/wings-of-freedom-award-overview-plaque/
https://www.aph.org/from-the-field-awards-plaques/
https://www.aph.org/overview-of-william-english-leadership-award-plaque/
https://www.aph.org/william-english-leadership-award-plaque-1991-2002/
https://www.aph.org/wall-of-tribute/
Juno Information
Description: The Juno™ is a portable handheld video magnifier with touchscreen. It includes all the features of a handheld video magnifier with the additional solution of Optical Character Recognition (OCR) capability. 
The larger screen allows the reader to view documents, while the writing-view accommodates short writing tasks such as filling out a form, signing your name, writing a check, or filling in a worksheet.
Screen Size: 7-inch matte LCD screen
Magnification Level: 2x to 30x
Buttons: Yes
Color Modes: 24 high-contrast color modes
Touch Screen: Yes
Speech: Yes
Storage: 4GB of storage, store up to approximately 600 images
Features:
• Built-in stand
• Menus spoken aloud
• Time and date
• OCR capability captures and reads aloud lengthy documents to reduce eye fatigue
• Full-page OCR
• Capture multiple pages and page navigation
• Zones – quickly jump to different sections in a book or article
• Save files using audio tags
• Transfer files from computer and flash drive via USB-C port
• Supported files – JPEG, PDF, DocX, and more
• Adjustable line and mask feature
• Teacher settings – lock out certain features to use Juno during an exam
• HD auto-focus camera rotates to support five camera positions: reading, hobby, distance, self-view, and writing-view
• Built-in User Guide
Dimensions: 8.27 x 5.71 x 1.19 in. – 1.38 lbs
Contents:
• Juno handheld video magnifier
• Protective carrying case with strap
• Quick Start
• User Guide
• Power adapter
• USB-C cable
• HDMI mini cable
• Two-year warranty
Connection: USB-C
Quota Funds: Yes
Warranty: Yes
Partner/Vendor: Freedom Scientific
Empowering the Blind and Visually Impaired to be more independent. That is what Envision is all about and Karthik is in the Blind Abilities Studio to tell us all about the launch of the Envision Glasses. You can Pre-Order your pair today at https://www.letsenvision.com/glasses and shipping will start early Summer! From the web: Turn Text into Speech Envision Glasses enable you to read all kinds of text from any surface in over 60 languages. The SmartGlasses have the fastest and the most accurate Optical Character Recognition (OCR), enabling you to read street signs, handwritten cards and even your favorite book. Know what's around you Envision Glasses can describe scenes, detect colors and scan barcodes. Giving you information about what is in front of you, helping you choose the right dress and assisting you in picking the right products while shopping. Conquer any situation with video calling If you ever feel you need a bit of extra help, you can always call upon your favorite human. Envision Glasses have a dedicated feature for video calling friends and family. You can also choose to share contextual information, such as your location on a map, so they can help you in difficult situations. With the Envision Glasses you will always know your way around. Read more on the web. Contact: You can follow us on Twitter @BlindAbilities On the web at www.BlindAbilities.com Send us an email Get the Free Blind Abilities App on the App Store and Google Play Store. Check out the Blind Abilities Community on Facebook, the Blind Abilities Page, the Career Resources for the Blind and Visually Impaired, the Assistive Technology Community for the Blind and Visually Impaired and the Facebook group That Blind Tech Show.
The future of work, says Marshall Sied, co-founder of intelligent automation consulting firm Ashling Partners, will focus on “removing tasks that humans don't want to be doing anyway.” For him, “it's about taking robots out of human work, not having robots replace human work.” In fact, when clients introduce automation technology like Robotic Process Automation (RPA), Optical Character Recognition (OCR) and machine learning to their organization’s people, processes and current technology landscape, they act “like an x-ray on everything that's inefficient,” Marshall says. “We re-engineer that process with automation and the customer experience in mind upfront. It’s a total shift.” The x-ray metaphor is apt, especially considering that “Ashling” is the phonetic pronunciation of the Gaelic word for “vision.” Marshall came on the podcast to share his views on the automation revolution, Ashling Partners’ place in it, and why we should cancel the concept of “failing fast.”
Ken Leeser is the creator of the app Recipe IQ, a nutrition calculator that takes the mystery out of healthy cooking! The app uses Optical Character Recognition (OCR) to extract the ingredients from images of recipes... or cooks can enter their own ingredients! Users have the option to scan in recipes, add links to recipes, or manually type in ingredients themselves, and the app does the rest! Website: https://www.recipeiqapp.com/ Facebook: https://www.facebook.com/recipeIQ IG: https://www.instagram.com/recipeiq/
Today's Quick Start comes from Creating Wealth Episode 882, originally published in September 2017. The month of October brings the 2017 tax season to a close. Jason and his panel showcase the organizational tools and portfolio management options available through the newly upgraded Property Tracker 2.0. The Property Tracker software makes creating Schedule E's, depreciation schedules and other tax forms easy, and includes a robust calendar that keeps track of all of your income properties' important dates. Key Takeaways: [02:25] Was the media bummed out that Hurricane Irma didn't cause more damage? [09:04] The Waffle House Index and Cryptocurrencies. [16:44] Fundamentals for organizing and tracking investment properties. [24:00] Property Tracker produces tax schedules including Schedule E and an up-to-date Schedule of Depreciation. [27:58] Property Tracker 2.0 has updated proformas with RV ratio evaluations. [30:24] Clarifying the statement "Every day you have a property you own and don't sell, you are buying it back from yourself." [35:05] Organizational tips for using Property Tracker. [39:28] Scanning with Optical Character Recognition (OCR) makes you more efficient. [46:38] Details about the new Amazon Echo contest, the Meet the Master's event and the upcoming Venture Alliance Mastermind. [50:20] Free one-hour Property Tracker onboarding session with Zack for Jason Hartman clients. Mentioned in This Episode: Jason Hartman Property Tracker Software Administrator@realestatetools.com Meet the Master's of Income Property Tickets Venture Alliance Mastermind Donate to Hurricane Relief Here
That Blind Tech Show Special: Seeing AI Developers Anirudh and Saqib Talk App Infancy and Money Recognition on the Way! With the huge response from the blindness community, the Seeing AI app, available in the App Store, has spread like wildfire, and people are excited about the possibilities coming from the Microsoft Accessibility team. Bryan Fischler, host of That Blind Tech Show, and Jeff Thompson from Blind Abilities have a conversation with Anirudh Koul and Saqib Shaikh, two developers from the Microsoft Accessibility team working on the Seeing AI app. You will hear about the hackathon where the seed was planted, and how the team uses users' feedback to determine the changes and improvements that have been coming fast and steady. The Seeing AI app is a project that uses artificial intelligence in some of its featured channels. The Short Text channel is like taking a glance at your mail: the built-in Optical Character Recognition (OCR) picks up text through the camera and begins reading instantly. The Document channel does more traditional OCR work and has audio indicators to help center the page content. The Product channel has an audible signal to assist in locating a bar code, and the signal speeds up as you close in on the bar code with the iPhone's camera. The picture is taken automatically, the database is searched, and the data is read to the user. Instructions as well as ingredients are also read if available. The Person channel allows the user to take pictures of individuals and tag them, so the facial recognition feature will recognize and say that person's name when using the camera and glancing around the area. This is where AI comes in: the Seeing AI app will also describe the person and guess their age. The Scene (beta) channel describes the photo taken, such as a bench in a park or a person walking a dog. Photos can be taken, or imported from the camera roll, to have the app describe the image. 
They say this app is in its infancy and there is a lot more to come. One feature coming to the Seeing AI app that was disclosed is a Money Identifier. Yes, this is a Swiss Army Knife of an app. You can follow the Microsoft Accessibility team on Twitter @MSFTEnable The Seeing AI app is only available from the App Store. Thank you for listening! Follow That Blind Tech Show on Twitter @BlindTechShow Send That Blind Tech Show an email That Blind Tech Show is produced in part with the Blind Abilities Network. You can follow us on Twitter @BlindAbilities On the web at www.BlindAbilities.com Send us an email Get the Free Blind Abilities App on the App Store.
The month of October brings the 2017 tax season to a close. Jason brings Fernando and Zack on the show to showcase the organizational tools and portfolio management options available through the newly upgraded Property Tracker 2.0. The Property Tracker software makes creating Schedule E's, depreciation schedules and other tax forms easy, and includes a robust calendar that keeps track of all of your income properties' important dates. Key Takeaways: [1:14] Fundamentals for organizing and tracking investment properties. [8:30] Property Tracker produces tax schedules including Schedule E and an up-to-date Schedule of Depreciation. [12:28] Property Tracker 2.0 has updated proformas with RV ratio evaluations. [14:54] Clarifying the statement "Every day you have a property you own and don't sell, you are buying it back from yourself." [19:35] Organizational tips for using Property Tracker. [23:58] Scanning with Optical Character Recognition (OCR) makes you more efficient. [31:08] Details about the new Amazon Echo contest, the Meet the Master's event and the upcoming Venture Alliance Mastermind. [34:50] Free one-hour Property Tracker onboarding session with Zack for Jason Hartman clients. Website: www.PropertyTracker.com
The month of October brings the 2017 tax season to a close. Jason brings Fernando and Zack on the show to showcase the organizational tools and portfolio management options available through the newly upgraded Property Tracker 2.0. The Property Tracker software makes creating Schedule E's, depreciation schedules and other tax forms easy, and includes a robust calendar that keeps track of all of your income properties' important dates. You can now get tickets to the Meet the Master's of Income Property event, and the Venture Alliance Mastermind is coming up in October. Key Takeaways: [02:25] Was the media bummed out that Hurricane Irma didn't cause more damage? [09:04] The Waffle House Index and Cryptocurrencies. [16:44] Fundamentals for organizing and tracking investment properties. [24:00] Property Tracker produces tax schedules including Schedule E and an up-to-date Schedule of Depreciation. [27:58] Property Tracker 2.0 has updated proformas with RV ratio evaluations. [30:24] Clarifying the statement "Every day you have a property you own and don't sell, you are buying it back from yourself." [35:05] Organizational tips for using Property Tracker. [39:28] Scanning with Optical Character Recognition (OCR) makes you more efficient. [46:38] Details about the new Amazon Echo contest, the Meet the Master's event and the upcoming Venture Alliance Mastermind. [50:20] Free one-hour Property Tracker onboarding session with Zack for Jason Hartman clients. Mentioned in This Episode: Jason Hartman Property Tracker Software Administrator@realestatetools.com Enter the Amazon Echo Contest Meet the Master's of Income Property Tickets Venture Alliance Mastermind Donate to Hurricane Relief Here
The Genealogy Gems Podcast with Lisa Louise Cooke - Your Family History Show
Have you ever felt like you got the short end of the genealogy stick when it comes to family heirlooms? Maybe you haven't inherited much in the way of family photos or memorabilia, or maybe you feel like you've tapped out all the potential goodies that are out there to find. In this episode I'll share an email I got from Helen, because she reminds us that you should never say never. I've also got another amazing story about an adoption reunion. And we'll check in with our Genealogy Gems Book Club Guru Sunny Morton about this quarter's featured book, The Lost Ancestor by Nathan Dylan Goodwin. And of course all kinds of other genealogy news and tips for you. We're going to take all that genealogy and technology noise out there and distill it down into the best of the best, the genealogy gems that you can use. I'm just back from several weeks on the road. Since we last got together in episode 178, I've been to Cape Cod to talk to the Cape Cod Genealogical Society about Time Travel with Google Earth, and all you Genealogy Gems Premium Members have that video class and handout available to you as part of your Premium membership – and if you're not a member, click Premium in the main menu at genealogygems.com to learn more about that. And then Bill and I headed to Providence, RI, where I was the keynote at the NERGC conference. That was my first time ever in New England, so it was a real treat. And we teamed up once again with the Photo Detective and Family Chartmasters and held our free Outside the Box mini genealogy sessions in our booth, which were very popular. Then I had a 2-day turnaround and Lacey and I were off to Anchorage, Alaska, to put on an all-day seminar for the Anchorage Genealogical Society. Another great group of genealogists! And Lacey and I added an extra couple of days to explore, and explore we did. We booked a half-day ATV tour to explore the national forest outside Anchorage. 
Now this was before the start of tourist season, so there we are, two gals driving out of town, onto a dirt road, and waiting at the meeting spot in the middle of nowhere, where we met Bob the Guide. He looked like he was straight out of Duck Dynasty! He showed us how to drive the ATVs, assured us that the bears weren't quite out yet, and then, packing his sidearm pistol, led us out into the wilderness for 4½ hours of amazing scenery. It was like we had the entire forest to ourselves. This guide would pull over every once in a while, whip out a telescopic lens on a tripod, and in seconds zero in on something way over on the mountain across the valley, and he'd say, "Look in there. See that clump of snow with legs? That's a Mountain Goat," or "That's a Dall Sheep." It was incredible. We saw moose, and muskrat, and the biggest rabbits I've ever seen in my entire life, which Bob the Guide called bunnies, and he was right. The only thing we never saw was a bear. But that was just fine with me and Lacey! So after our mountain safari we flew home and I gave an all-day seminar in my own backyard in Denton, TX, and then Bill and I jumped in the Suburban and drove to St. Charles, Missouri, where I spoke at the National Genealogical Society Conference. St. Charles is just on the other side of the river from St. Louis, and we were pleasantly surprised to find a quaint little main street. Diahan Southard, Your DNA Guide here at Genealogy Gems, was with us, and Diahan and I drug poor Bill in and out of every "foo foo potpourri" shop they had when we weren't busy meeting so many of you at the booth or in class. It was a 4-day conference, which is A LOT of genealogy, but we had a blast and again teamed up with Family Chartmasters, The Photo Detective and Family Tree Magazine for an Outside the Box extravaganza of free sessions in the booth. And this time Diahan Southard joined in with sessions on Genetic Genealogy. And all this reminds me of an email I received recently from Shelly. 
She writes: "I am a new listener and new premium member of Genealogy Gems. Thanks for getting me motivated to organize my research and get back into learning my family history. I had never thought about attending a genealogy conference before, but listening to your podcasts has gotten me interested in going. There is a conference coming up in less than two weeks, only 1 1/2 hours from me in St. Charles, Mo. I can't afford to attend the actual conference, but would it be worth it to just go to the free exhibit space? I listened to one of your podcasts that mentioned you and a few others give free mini classes. Please let me know what you think. Thanks, Shelly" I told Shelly that I thought it would absolutely be worth it. In fact, that is one of our goals with our free Outside the Box sessions in our booth - to give everyone a free opportunity to experience a genealogy conference. The hall is very large, there will be loads of exhibitors, and you can not only attend any and all of our sessions, but at most larger conferences you'll usually also find companies like Ancestry, MyHeritage and FamilySearch holding sessions at their booths. Well, Shelly took my advice and she wrote back. She says: "Thanks for your encouragement to attend the NGS exhibitor area! I was able to attend on Friday and enjoyed looking at all the booths and talking to some of the exhibitors. I was also able to attend a few Outside the Box sessions, although yours were too crowded to see or hear very well! Thanks so much for doing this. While waiting for a free session to start in another area, I overheard two men talking about DNA for genealogical purposes and privacy. My ears perked up as they discussed an instance where a DNA sample sent to Ancestry.com was used to help solve a crime committed by a relative of the DNA tester. I don't have enough information to form any opinions on that case, but the question of privacy came up when I asked my mother to take a DNA test for me. 
The first thing she said was that it sounded interesting, but she was worried whether the government or the police could get ahold of the information. I encouraged her to read the privacy information on the site and to let me know, but I told her I didn't see how anyone could get the information. Her curiosity got the better of her, as I knew it would, and she agreed to the testing and I am awaiting the results. The funny thing is that my mother does have a criminal history and has served over ten years in prison (I was raised by my father from age 5). Hopefully there aren't any serious unsolved crimes my mom has been involved in! She is 64 now, so hopefully the statute of limitations has passed for most crimes. I will let you know if the FBI come knocking on my door :)" I want to say thank you to all of you listening who stopped by the booth, and welcome to all our new listeners who got to know us at these recent conferences and seminars. We are very glad you are here! Recent Family Tree Magazine Evernote Webinar: In the last year I've moved from earthquake central (California) to Tornado Alley (Texas) and it's been a bit of an adjustment, to say the least. Two weeks ago, while I was presenting a webinar on using Evernote for genealogy for Family Tree Magazine, my husband silently placed a note in front of me. It said that we were under a tornado watch, and if it got any worse he was hauling me off the computer and into the storm shelter! I hung in there, and thankfully it blew over and we finished the webinar. Genealogy wins again! (And yes, the video of the webinar is coming soon to Premium Membership.) Then last night we spent about an hour in our shelter room while our county got pummeled with torrential rain, non-stop lightning, and yes, even a few tornadoes touched down. Our devoted dogs Howie and Kota instinctively blocked the doorway to the shelter in an effort to keep us safe. They did a good job, and all is well! 
All this threat of danger and destruction has reinforced my decision to bring into our Genealogy Gems family a brand new sponsor. Backblaze is now the official backup of Lisa Louise Cooke's Genealogy Gems. If you've been to the RootsTech conference then you may already be familiar with them. Backblaze is a trusted online cloud backup service that truly makes backing up all your most precious computer files super easy. The thought of losing my genealogy files is too much to bear. Now I can concentrate on keeping my loved ones safe through the storms of life because I know Backblaze is taking care of my files and photos! Many of you have asked me which company I use to back up my files. I've done my homework and Backblaze is my choice. I invite you to visit and get all your files backed up once and for all. "Dear Lisa, Thanks for the latest email. I have been using Backblaze for a year now. I thankfully have not needed their complete services :-), but I love the feeling of being protected. Have a great weekend! It was so nice to meet you at RootsTech in February. Thanks, Ellen" Tyler Moss, the dean of Family Tree University, wrote me after a recent webinar I gave for them: "One woman typed an ellipsis (…) into the chat box. I messaged her back and said 'I'm sorry, did you mean to send a question? All I see are three periods.' And she said, 'Oh no, I'm just in wonder at all the awesome things I can now do in Evernote!'" The webinar we were doing was called "Enhance Your Genealogy with Evernote," and in that session, which we recorded to video as well, I covered 10 terrific genealogy projects you can use Evernote for to improve your research, organization and productivity. My motto these days is: save time by being more efficient so you have more time to spend with your ancestors, and that's what this training session was all about. 
And the good news for all of you who are Genealogy Gems Premium Members is that the video and downloadable handout are coming very soon to the Premium Videos section of genealogygems.com. Look for the announcement of its release in our free weekly newsletter. You can sign up for the free Genealogy Gems weekly e-newsletter on our homepage.

GEM: Evernote Library Project. Create an Evernote genealogy book library:
1. Create a new notebook called "Library."
2. With your smartphone or tablet, snap photos of the cover of each of your genealogical books.
3. Send the photos to the Library notebook in Evernote (on your mobile device, tap the share icon and tap Evernote; you will need to have authorized the Evernote app). Another option is to email them to your unique Evernote email address, which will also place them in Evernote.
Evernote will apply Optical Character Recognition (OCR) to each image, making them keyword-searchable. To see if you already have a book, tap the notebook and then search an applicable keyword.

Inspiration and motivation from Helen: A recent email from listener Helen reminds us to search our basements and attics for unique and amazing family history finds. There's no substitute for being able to tell family members' stories through their own words and photographs. "I just had to tell you about my recent find. My late father-in-law served in the Canadian Navy for 39 years, entering Naval College when he was only 14. Most of my knowledge about his life came from talking with him before he died. Of course, then I did not know the questions to ask. "About a month ago, I was preparing for a lecture on his life for a local World War 1 seminar. I started looking around in our basement, as I knew we had some material from when we cleared out his house when he died, but I had no idea of just what exciting material I would find. "I found his personal diaries, with the earliest from 1916! 
The journals give an amazing first-person record of naval service from a person who devoted his life to the service of his country. I was able to weave his actual words into the somewhat dry official record of his long service, [ending with] his being presented with a Commander of the British Empire medal shortly before his retirement. "I am so grateful that the family saved these invaluable documents through the myriad of moves that a naval officer's career entails. In a different box, I found his photographs from the same era—some even earlier than the journals. I am now seriously considering publishing the journals along with the photographs, as they deserve to be shared." Genealogy Gems Premium members can access Premium podcast episode 116 to hear a discussion between two authors of books on life-story writing, and can also access a Premium podcast AND video on how to make a family history video.

Her Birth Mom Was Her Co-Worker! Birth Family Reunion: A woman recently went searching for her birth mom after receiving a copy of her adoption records (from her home state of Ohio). She didn't have to search very far: just in a different department at her workplace. "When [La-Sonya] Mitchell-Clark first received her birth records in the mail on Monday and saw the name Francine Simmons, she immediately plugged it into Facebook," reports the story. It didn't take long for her to recognize her mother as a woman who worked at the same business she did. "Following a tearful reunion, the two…discovered that they live just six minutes away from one another," reports the article. La-Sonya also learned that she has three birth sisters, one of whom also works at the same company. Wow! Company picnics and water cooler chats must suddenly seem a lot more meaningful after this birth family reunion. Learn to use your own DNA to search for genetic relatives (whether you're adopted or not!) 
with CeCe Moore, a leading expert who appears regularly on television shows to talk about finding family with DNA.

Genealogy Gems Book Club: Our featured book for the 2nd quarter of 2015 is The Lost Ancestor by Nathan Dylan Goodwin. Sunny's book recommendations: Hiding the Past by Nathan Dylan Goodwin, The Orange Lilies by Nathan Dylan Goodwin, The Marriage Certificate by Stephen Molyneux, Out of the Shoebox by Yaron Reshef, and Jimmy Fox's Nick Herald Genealogical Mystery series. Nathan Dylan Goodwin does have two other titles in the same series. I've read them both. Hiding the Past takes us into a genealogical mystery set in World War II, and it's a similar type of read to The Lost Ancestor. I enjoyed it. The Orange Lilies is a novella set at Christmastime. Here Morton puts his skills to work—and his emotions—to confront the story of his own origins and a family story from the Western Front in World War I a century ago. It's a more personal story, and Nathan, I think, is pushing into newer territory as a writer in dealing with more intimate emotion. But I like seeing Morton have these experiences. I also have a few more titles to recommend along these lines. It's that "If you liked this book, we think you'll also like…" kind of list. The Marriage Certificate by Stephen Molyneux: this is a novel. I opened to the first page and the About the Author made me laugh: "Stephen, amateur genealogist, lives in Hampshire and the South of France with two metal detectors and a long-suffering wife." The book opens with a scenario many of us may be sympathetic with. A genealogy buff buys a marriage certificate he sees on display at an antiques gallery. He begins researching the couple with an idea of returning the certificate to them. Eventually he uncovers several secrets, one with some money attached to it, but others are also chasing this money. It may sound a bit far-fetched, but it doesn't unfold that way. I like the surprise twists that bring the story into the present day. 
I also liked living out a little fantasy of my own through Peter, the main character: that of being the genealogical research hero who brings something valuable from the past to living relatives today. Another book I recently enjoyed is Out of the Shoebox: An Autobiographical Mystery by Yaron Reshef. This one's a more serious, and I think a little more sophisticated, read. In this memoir (so a true story), Yaron gets a phone call about a piece of property his father purchased in Israel years ago. He and his sister can inherit it, but only if they can prove that man was their father. He goes on an international paper chase into the era of World War II, the Holocaust, and the making of Israel. Then a forgotten bank account surfaces. There's more, of course, in Yaron's two-year quest to understand the tragedies of his family's past and recover some of its treasures. There's another series I've been made aware of but haven't read yet. This is Jimmy Fox's Nick Herald Genealogical Mystery series: Deadly Pedigree, Jackpot Blood and Lineage and Lies. The hero is an American genealogist who lives and works in New Orleans, of course one of the most colorful and historical parts of the U.S. I'll put links to all of these on our Genealogy Gems Book Club webpage.
Tek Talk welcomes a representative from Envision AI. Envision, an award-winning assistive technology company, has announced the official launch of Envision Glasses, AI-powered smart glasses for blind and visually impaired people. Envision's software uses artificial intelligence (AI) to extract information from images and then speaks that information out loud, so the user has a greater understanding of the environment around them. Blind and low-vision users can use it to read documents at home or work, view labels while shopping, easily recognize their friends, find personal belongings at home, use public transport, video call anyone in real time, and more. Envision is developed for, and together with, the visually impaired community. The app is simple, gets things done, and brings the best assistive experience to blind and low-vision users. Simply use your phone camera to scan any piece of text, your surroundings, objects, people or products, and everything will be read out to you thanks to Envision's smart artificial intelligence (AI) and Optical Character Recognition (OCR). Now, they have taken things one big step further and put everything into glasses. Website: https://www.letsenvision.com/envision-glasses Contact information: Karthik Mahadevan, Co-founder / Chief Designer, and Karthik Kannan, Co-founder / Chief Engineer. Email: karthik@LetsEnvision.com