Set of subroutine definitions, protocols, and tools for building software and applications
API, short for Application Programming Interface, refers to the software interfaces that allow different applications and services to communicate with one another. With software becoming ever more vertical and specialised, integration between platforms via APIs comes up more and more often. In the hotel industry the PMS sits at the centre of the "star", since the hotel management system holds most of the information about guests, reservations, rates and prices. Some practical examples of API connections from the PMS to other services are: home-automation systems, reputation services, and RMS (Revenue Management System) tools.
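As a loose illustration of that hub-and-spoke pattern, the sketch below shows a client pulling confirmed reservations from a PMS and handing them to an RMS over HTTP. All endpoint paths, field names and tokens are invented for the example; they are not those of any real PMS or RMS.

```python
# Hypothetical sketch: read bookings from the PMS (the centre of the "star")
# and forward the occupancy picture to an RMS so it can recalculate prices.
# URLs, fields and tokens are placeholders, not a real vendor's API.
import requests

PMS_BASE = "https://pms.example.com/api/v1"
RMS_BASE = "https://rms.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <api-token>"}

reservations = requests.get(
    f"{PMS_BASE}/reservations",
    params={"status": "confirmed"},
    headers=HEADERS,
    timeout=10,
).json()

requests.post(
    f"{RMS_BASE}/occupancy",
    json={"reservations": reservations},
    headers=HEADERS,
    timeout=10,
).raise_for_status()
```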
Today's episode showcases a really important advancement in the provision of technology to the insurance industry. Back in the early days of computerisation we installed different systems in our businesses to do different jobs. We didn't expect them to talk to each other. And because each was made of proprietary tech – they almost certainly didn't. Nowadays we want systems to ingest data sources and the output of other systems, and we expect these new outputs to be easily ingestible further down the chain by our own suppliers and all the different departments of our business, as well as regulators and other essential third parties. Common standards and modern connectivity are allowing this to happen. I suppose this is the difference between computerisation and digitisation. But this demands a change in the way technology providers go about their business. Either they will try to be all things to all people and produce their best attempts at solving every problem and answering every question that the insurance industry is trying to answer, or they will have to be more realistic and pragmatic. In a world brimming with excellent providers of increasingly specialist solutions, many are realising that the best outcomes for customers are going to come about when they are provided with a wide array of choice and ease of operation. A customer should be able to run different tools on the same platform without having to become an expert in plugging them all together themselves. Tech providers should be doing this for them and making their lives easy. There should also be the possibility of choosing between different competing options for certain tasks. This strategy is called an ecosystem approach and it is what we are going to be getting deep into today. To help us I spoke to Ian Summers, Global Business Leader of AdvantageGo, and Jeff Cohen, a Senior Vice President at Zywave. Ian and Jeff are vastly experienced in their fields. Ian is the architect of the AdvantageGo ecosystem idea and Zywave is a core member of this affiliation. You absolutely don't have to be a techie to find this discussion useful, and Ian and Jeff are really approachable and good at explaining things to laypeople like me. Half an hour with these two and I guarantee you'll be excited about the possibilities that a genuinely digitised insurance world is going to open up for all of us, not least in a brave new AI-enabled world. NOTES AND ABBREVIATIONS: We mentioned ISO, which is the Insurance Services Office. API is Application Programming Interface. ECF is the Electronic Claims File used in the London Market. LINKS: Learn more about AdvantageGo's ecosystem here: https://www.advantagego.com/ecosystem/ Expect more additions as time progresses.
The story of Marco Palladino, Co-Founder and CTO of Kong, is truly incredible. Kong is a tech company based in San Francisco that two years ago closed a $100 million round led by Tiger Global, taking its valuation to $1.4 billion and making it a unicorn. Kong sells technology for managing APIs, Application Programming Interfaces, which are the backbone of the digital world. They make it possible to manage the exchange of data, connecting companies with their suppliers, partners and customers. Kong's mission is to power these digital connections securely. Kong's investors include none other than Travis Kalanick, the founder of Uber, Jeff Bezos, the founder of Amazon, and former Google CEO Eric Schmidt. The story of Marco and his co-founder Augusto Marietti was written to be told in a podcast – and we guarantee it will astonish you! Marco and Augusto started working on Kong's predecessor, Mashape, from their garage in Milan when they were only 18. But in Italy they found no one who believed in their idea, and investors offered terms they could not accept. Instead of abandoning the dream, they bought a one-way ticket to San Francisco. Making it in Silicon Valley was anything but easy: they slept on benches, lost 20 kg eating nothing but pasta with tuna for months, and poured all their energy into growing the company until the breakthrough came in 2017. SOCIAL MEDIA If you like the podcast, the best way to tell us or give us feedback (and what helps us most to spread it) is simply to leave a 5-star review or a comment on Spotify or the Apple Podcasts app. It really helps us a lot, so don't hesitate :) If you want to ask us questions or follow us, you can do so here: Instagram @madeit.podcast LinkedIn @madeitpodcast ACKNOWLEDGEMENTS We want to thank BAIA, the Business Association Italy America, for their partnership. BAIA is a non-profit association that has operated in the San Francisco Bay Area since 2006 and is helping us promote this series with its members. BAIA is run by a group of Italian professionals in San Francisco who create professional networking opportunities within the Italian and Italian-American community, facilitating the open exchange of knowledge between Italy and the United States through events for managers and entrepreneurs in Silicon Valley. More information at https://www.baia-network.org/ and, if you are in Silicon Valley, we also recommend signing up for their newsletter.
What is an API? Where did the term come from? What does an API do, and why are they important for developers? See omnystudio.com/listener for privacy information.
One of the key parts of building decentralised applications is the infrastructure they require. It is what allows a DApp to query the blockchain, fetch the data and information it needs, and display it on a web interface. It also allows the DApp to create transactions and interactions with the blockchain. Without this infrastructure, interacting with the blockchain is harder and more costly. This is where Maestro comes into play, allowing DApp developers to quickly and easily interact with the blockchain via an Application Programming Interface, or API. APIs are common in the web development space and make for an easy and understandable way for developers to get and push data to and from the blockchain (a rough sketch of such a call follows the links below). Maestro has successfully launched its new series of features to allow developers to build better and more engaging DApps. You can find out more about the V1 release at: https://www.gomaestro.org/blog/platform-v1-launch
Chapters
00:00 Intro
02:02 How were people querying on-chain data before?
03:04 Scope and features Maestro provides
05:15 Event notifications
06:55 Turbo transactions
08:31 Costs for using Maestro
09:59 Getting started with Maestro
11:17 Future plans
11:46 DeFi Analytics
13:48 Library of Open Source Contracts with Anastasia Labs Partnership
16:55 Will these smart contracts be audited?
18:04 Integration with Optim Bonds
20:37 Decentralisation of Maestro
22:48 Building on Maestro
25:28 Final comments
Catalyst Proposals
https://cardano.ideascale.com/c/idea/106445
https://cardano.ideascale.com/c/idea/105135
https://cardano.ideascale.com/c/idea/105183
https://cardano.ideascale.com/c/idea/104382
https://cardano.ideascale.com/c/idea/102983
https://cardano.ideascale.com/c/idea/102990
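As promised above, here is a rough sketch of what "querying the blockchain via an API" looks like from a DApp backend. The base URL, header name and response shape are assumptions made for illustration, not Maestro's actual interface.

```python
# Illustrative only: a DApp backend asking a hosted blockchain-data API for
# the UTxOs at an address instead of running its own chain indexer.
# URL, header and response fields are assumptions, not a real provider's API.
import requests

API_KEY = "<your-project-key>"
BASE_URL = "https://api.example-chain-provider.io/v1"

def utxos_for_address(address: str) -> list[dict]:
    resp = requests.get(
        f"{BASE_URL}/addresses/{address}/utxos",
        headers={"api-key": API_KEY},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json()["data"]

# The frontend can then display balances without ever touching a node directly.
```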
Foundations of Amateur Radio During the week, prompted by a protest on popular social media site Reddit, I rediscovered that there are other places to spend time. It sounds absurd now, but until then much of my social interaction with the world was via a single online presence. This didn't happen overnight. Over the years more and more of my time was spent on Reddit engaging with other humans around topics of my interest, amateur radio being one of them. As you might know, I'm the host of a weekly net, F-troop. It's an on-air radio discussion for new and returning amateurs that's been running since 2011 and you can join in every Saturday for an hour at midnight UTC. In addition to the net, there's an online component. It captures items of interest shared during the on-air conversation. It's intended to stop the need to read out web addresses on-air, create a historic record of the things we talk about and allow people who are not yet amateurs to explore the kinds of things that capture our interest. Since 2014, F-troop online was a website that I maintained. After the announced demise of the service in 2020 I explored dozens of alternatives and landed on the idea to move to Reddit, which happened in March of 2021. At the time of selecting Reddit as the successor to the website, I wanted to create a space where anyone could add content and discuss it, rather than rely on a single individual, me, to update the website every time something was mentioned. During the net these days you'll often hear me ask a person to post that on Reddit. This to illustrate, at a small scale, how the F-troop community shares its knowledge with each other and the wider community. With the realisation that there are other places to spend time, comes an uneasy feeling about how we build our online communities, and how resilient they really are. Before the Internet our amateur radio community talked on-air, or in person at club meetings, or shared their interests in a magazine, or wrote letters. Today we congregate online in many different communities. If one of those fails or loses favour, finding those people elsewhere can be challenging, especially if those communities prefer anonymity. For quite some time now I have been thinking about how to build a radio amateur specific online community. The issues to surface, address and overcome are wide and varied. I created a list ... hands up if you're surprised ... I will point out that I'm sure it's incomplete, your additions and comments are welcome. Funding is the first item to consider. All of this costs time and money. Amateurs are notorious for their deep pockets and short arms, but they're no different from much of humanity. If this community needs to endure, it needs to be financially sustainable from the outset. Authentication and Identity is the next priority. If it's for amateurs, how do you verify and enforce that and what happens if an amateur decides not to renew their callsign, do they stop being an amateur? Should this community be anonymous or not? Moderation and Content is next on the list. What types of content are "permitted"? What is the process to regulate and enforce it? Is this forum public and accessible via a search engine, or private? Can people who are not yet amateurs benefit from the community and use it to learn? How do you set rules of conduct and how do you update them? How do you deal with rule infractions and how do you scale that? Who is this for? 
Is it decentralised across each callsign prefix, across a DXCC entity, or based on some other selection criteria? Can you have more than one account, or only one per person, or one per callsign? What about machine accounts, like a local beacon, repeater, solar battery, radio link, propagation skimmer or other equipment? What about bots and APIs? If that doesn't mean anything, a bot, short for robot, is a piece of software that can do things, like mark content as being Not Safe For Work, or NSFW, or it could enforce rules, or look-up callsigns, or share the latest propagation forecast or check for duplicates, scale an image, convert Morse code, check for malicious links, or anything you might want in an online community. The way a program like a bot, or a mobile client, or a screen reader, or a desktop application talks to the community is using an API, or an Application Programming Interface. Incidentally, the protest at Reddit is about starting to charge for access to the API, something which will immediately affect software developers and eventually the entire Reddit community, even if many don't yet realise this. What about system backups and availability? How seriously are we taking this community? Is there going to be a Service Level Agreement, or are we going to run it on a best-effort basis? How long is it acceptable for your community to be inaccessible? What about content archiving and ageing? Do we keep everything forever, do we have an archive policy? What happens if a topic that's permitted one year isn't permitted a year later? And those are just to start the discussion. There are plenty of options for places to start building another community, but will they last more than a couple of years, or be subject to the same effects that a Coronal Mass Ejection causes on HF propagation, being wildly random and immensely disruptive? At the moment I'm exploring an email list as a place to store our F-troop data and I intend to discuss archiving it in the Digital Library of Amateur Radio and Communications. Where is your online community and how resilient is it really? I'm Onno VK6FLAB
In the aftermath of the devastating earthquake in Turkey and Syria, thousands of volunteer software developers have been using a crucial Twitter tool to comb the platform for calls for help — including from people trapped in collapsed buildings — and connect people with rescue organizations. They could soon lose access unless they pay Twitter a monthly fee of at least $100 — prohibitive for many volunteers and nonprofits on shoestring budgets. “That's not just for rescue efforts which unfortunately we're coming to the end of, but for logistics planning too as people go to Twitter to broadcast their needs,” said Sedat Kapanoglu, the founder of Eksi Sozluk, Turkey's most popular social platform, who has been advising some of the volunteers in their efforts. Nonprofits, researchers and others need the tool, known as the API, or Application Programming Interface, to analyze Twitter data because the sheer amount of information makes it impossible for a human to go through by hand. Kapanoglu says hundreds of “good Samaritans” have been giving out their own, premium paid API access keys (Twitter already offered a paid version with more features) for use in the rescue efforts. But he says this isn't “sustainable or the right way” to do this. It might even be against Twitter's rules. The loss of free API access means an added challenge for the thousands of developers in Turkey and beyond who are working around the clock to harness Twitter's unique, open ecosystem for disaster relief. “For Turkish coders working with Twitter API for disaster monitoring purposes, this is particularly worrying — and I'd imagine it is similarly worrying for others around the world that are using Twitter data to monitor emergencies and politically contested events,” said Akin Unver, a professor of international relations at Ozyegin University in Istanbul. The new fees are just the latest complication for programmers, academics and others trying to use the API — and they say communicating with anyone at the company has become essentially impossible since Elon Musk took over. This article was provided by The Associated Press.
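To make "using the API" concrete, here is a minimal sketch of the kind of programmatic search the volunteers describe, using Twitter's v2 recent-search endpoint as it stood at the time (access levels and pricing have since changed). The query string and token are placeholders.

```python
# Sketch of a programmatic Twitter search for calls for help.
# The Turkish query phrase means roughly "I am under the rubble";
# the bearer token is a placeholder for your own credentials.
import requests

BEARER_TOKEN = "<your-bearer-token>"

resp = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    params={"query": "enkaz altındayım -is:retweet", "max_results": 100},
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    timeout=15,
)
resp.raise_for_status()
for tweet in resp.json().get("data", []):
    print(tweet["id"], tweet["text"][:80])
```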
On this episode of the Robotic Process Automation Podcast, Trellis is joined by CEO, Founder and Lead Advisor of Digital Workforce Solution, Matt Gustitus to talk about the technical aspects of process automation, with a focus on the difference between Robotic Process Automation (RPA) and Application Programming Interface (API).https://www.digitalworkforcesolution.com/
"I don't know how you're missing the value." - Michael Dunworth BWJ 056: I chat with entrepreneur and Bitcoin maxi, Michael Dunworth from Wyre. They provide an Application Programming Interface for financial services like exchanges. In this episode you will learn about: 4:28 Moving to Silicon Valley 12:48 Exiting a business after 8 years 15:17 Highlights & challenges of entrepreneurship 21:35 Crash-course on infrastructure for exchanges 29:22 Bitcoin is immutable 31:40 Our evolving consciousness 38:40 Value is in the eye of the beholden 46:23 "We're all godparents and Satoshi's dead" 50:39 The story of the 50mil sats reward for block 7,140,000 Relevant links: @michaeldunwort1 on Twitter sendwyre.com This episode was sponsored by: FastBitcoins.com Please support this free show: Donate to our Crowd-fund on Geyser How else can you help? Rate Subscribe Share Get in touch: Personal Twitter: @jakeeswoodhouse Podcast Twitter: @bitcoinwithjake Thank you so much for taking the time and effort to support the show! Best, Jake
“Don't be afraid of long copy; be afraid of insufficient clarity,” Flint McGlaughlin taught in Website Wireframes: 8 psychological elements that impact marketing conversion rates (https://meclabs.com/course/sessions/website-wireframes/). I thought of that quote when my latest guest talked about how often being afraid drives marketing and business leaders, and so they resort to focus groups and consultants to cover for themselves in case something goes wrong. But she called this the dangerous delusion of safety, that playing it safe can hurt you more than you know. And she shared a story from her career illustrating that lesson. You can hear that lesson, and many more lesson-filled stories, from Jasmin Guthmann, Head of Corporate Communication, Contentstack (https://www.contentstack.com/), on this episode of the How I Made It In Marketing podcast. Guthmann manages a global team of eight people that she has built from scratch over the past six months, along with a $1 million budget (her part of the organization's overall marketing budget). Contentstack has raised $89 million in funding over three rounds. Guthmann is also Vice President of MACH Alliance (https://machalliance.org/). MACH stands for Microservices based, API-first, Cloud-native SaaS and Headless. The not-for-profit industry body has 70 member companies, ranging in size from startups to Google. Listen to my conversation with Guthmann using this embedded player or click through to your preferred audio streaming service using the links below it.
Stories (with lessons) about what she made in marketing
Some lessons from Guthmann that emerged in our discussion:
Be the person who doesn't give up when things get complicated. Be the one to simplify them. Then, your creative teams can do phenomenal things.
The Dangerous Delusion Of Safety: Playing it safe can hurt you more than you know.
There's no way you can think your way into predicting what your customers need. Seriously. Stop trying.
“If you want it, don't be afraid to ask for it.”
“Large meetings are not for decisions” and “Be the calm in the storm.”
“Don't be afraid to ask for better from your people. But make sure you walk the talk!”
Related content mentioned in this episode
Customer-First Marketing: A conversation with Wharton, MarketingSherpa, and MECLABS Institute (https://sherpablog.marketingsherpa.com/consumer-marketing/wharton-interview-customer-first-marketing/)
Using the Science of Habit Formation in Customer-First Marketing (interview with Charles Duhigg) (https://sherpablog.marketingsherpa.com/marketing/charles-duhigg-interview-part-two/)
About this podcast
This podcast is not about marketing – it is about the marketer. It draws its inspiration from the Flint McGlaughlin quote, “The key to transformative marketing is a transformed marketer” from the Become a Marketer-Philosopher: Create and optimize high-converting webpages (https://meclabs.com/course/) free digital marketing course.
Apply to be a guest
If you would like to apply to be a guest on How I Made It In Marketing, here is the podcast guest application – https://www.marketingsherpa.com/page/podcast-guest-application
This article was published in the MarketingSherpa email newsletter. “Attention precedes interest. Interest precedes engagement; engagement proceeds relationship,” Flint McGlaughlin teaches in Headline Writing: 4 principles that could drive down your website bounce rate (https://meclabs.com/course/sessions/headline-writing/). When I asked my latest guest how she builds those relationships with her marketing, she talked fondly of learning directly from real people, whether in her early days in a trade show booth or more recently using digital technology. The end goal of listening to customers is the same – to help her organization and team of marketers understand customers as real people, which will ultimately help them build more relationships with customers. You can learn the story behind that lesson from Michelle Huff, Chief Marketing Officer, UserTesting (https://www.usertesting.com/), along with many more lesson-filled stories from her career, on the latest episode of the How I Made It In Marketing podcast. Huff manages a team of 65 at UserTesting. UserTesting is a publicly traded company on the New York Stock Exchange. It reported $147.4 million in revenue in 2021, up 44% year-over-year.
Stories (with lessons) about what she made in marketing
Obstacles are opportunities, if you choose to view them as opportunities.
Utilize customer empathy when trying to involve the customer in marketing efforts.
Surround yourself with the right people.
Marketers should get involved with the sales team to learn from them.
To learn about how your product really works, teach someone else.
Become a true partner with your Sales counterpart.
Related content mentioned in this episode
Empathy Marketing: 3 examples of empathetic marketing in action (with results) (https://www.marketingsherpa.com/article/case-study/empathy-marketing)
Informed Dissent: The best marketing campaigns come from the best ideas (https://sherpablog.marketingsherpa.com/online-marketing/marketing-dissent-campaigns/)
Healthcare Marketing Leadership: Build communities…not a customer list, walk your own path, take care of yourself (podcast episode #30) (https://www.marketingsherpa.com/article/interview/healthcare-marketing-leadership)
About this podcast
This podcast is not about marketing – it is about the marketer. It draws its inspiration from the Flint McGlaughlin quote, “The key to transformative marketing is a transformed marketer” from the Become a Marketer-Philosopher: Create and optimize high-converting webpages (https://meclabs.com/course/) free digital marketing course.
Get more episodes
To receive future episodes of How I Made It In Marketing, sign up to the MarketingSherpa email newsletter at https://www.marketingsherpa.com/newsletters
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.11.01.514681v1?rss=1 Authors: Cleeve, P., Dierickx, D., Buckley, G., Gorelick, S., Naegele, L., Burne, L., Whisstock, J. C., de Marco, A. Abstract: Automation in microscopy is the key to success in long and complex experiments. Most microscopy manufacturers provide Application Programming Interfaces (APIs) to enable communication between a user-defined program and the hardware. Although APIs effectively allow the development of complex routines involving hardware control, developers need to build the applications from basic commands. Here we present a Software Development Kit (SDK) for easy control of Focussed Ion Beam Scanning Electron Microscopes (FIB/SEM). The SDK, which we named OpenFIBSEM, consists of a suite of building blocks that simplify the development of complex automated workflows. Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC
Congress has hundreds of members, thousands of employees, and a collection of monumental buildings. Now it has an application programming interface, or API. For more about APIs and how this one will help spread information about the legislative branch, Federal Drive host Tom Temin spoke with product owner Andrew Weber and the director of IT design and development, Jim Karamanis, both from the Library of Congress.
In this episode of Executive with a Cause, host Tammy Ven Dange chats with Debbie Saunders, founder and CEO of Wildlife Drones. Pesky parrots inspired Dr Debbie Saunders to establish this innovative approach to animal tracking, which has evolved from a research project to a social enterprise. In this episode, we chat about her financing journey, deciding between an NFP or business model approach, data opportunities and expanding into the US market. To read the full episode notes and related links, visit: https://roundboxconsulting.com.au/podcast/ Thank you for listening to the Executive with a Cause podcast. Don't forget to follow us on Apple Podcasts, Spotify or wherever you listen to podcasts. IT in Plain English This week's segment answers the question, "What is an API or an Application Programming Interface?" If you find that your legacy systems are having difficulties sharing data, understanding this technical term may help you with investment decisions. Sign up here to subscribe to the "IT in Plain English" newsletter. You can submit your question to Tammy Ven Dange by messaging her on LinkedIn, and maybe she'll answer it on the show. Links & Resources Connect with Debbie on Linkedin Visit Wildlife Drones' website and read their case studies Follow Wildlife Drones on Facebook, Linkedin and YouTube Other Episodes Mentioned in this Show Watch our interview with Jim Lynch of Zealandia Credits
Application Programming Interface – how can you break the term down and understand it right away? ...
Application Programming Interface, or API, was initially built to tackle the historic IT issue of system integrations. The software enables two applications and their individual data siloes to communicate with ease, leaving companies to focus on matters that do not allow for automation. However, this isn't the only tech problem that API, when managed correctly, can be used to solve. Martin Buhr, CEO and Founder of Tyk is joining Scott Taylor, Principal Consultant at MetaMeta Consulting, to explore how API management can drive digital transformation and continuity. Martin lends his expertise on making data 'talk' and delves into how GraphQL and Tyk's Universal Data Graph is changing the game. He also discusses the differences and benefits of best-in-class solutions versus home-grown solutions, before explaining the importance of digital transformation projects through use cases and customer stories.
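As a hedged sketch of what "making data talk" through a single GraphQL layer can look like from the consumer's side: one query fetches a customer record and their orders in a single round trip. The gateway URL and schema below are hypothetical, not Tyk's Universal Data Graph itself.

```python
# Illustrative only: one GraphQL query against a hypothetical gateway endpoint
# that stitches a customer record (from a CRM API) together with their orders
# (from a commerce API) in a single round trip.
import requests

query = """
query ($id: ID!) {
  customer(id: $id) {
    name
    orders { id total status }
  }
}
"""

resp = requests.post(
    "https://gateway.example.com/graphql",
    json={"query": query, "variables": {"id": "42"}},
    headers={"Authorization": "Bearer <token>"},
    timeout=10,
)
print(resp.json()["data"]["customer"])
```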
Axel Kloth Axel is a physicist by training, and he is used to the need for large-scale compute. Axel discovered over 30 years ago that scalability of processor performance was paramount for solving any computational problem. That necessitated a new paradigm in computer architecture. At Parimics, SSRLabs and Axiado he was able to show that new thinking was needed, and what novel practical solutions could look like. He is now repeating that approach with Abacus Semi. On today's show we talk about: What is an Application Programming Interface, an API for short? Why do we need APIs? Are there legal issues on the use and modification of APIs? If you are in school right now, studying Computer Science, what would you learn and why? Where will industry have the most problems in the years to come? What would be an ideal API? What are the pros and cons of APIs? Connect with Axel: https://www.linkedin.com/in/axelkloth/ http://www.abacus-semi.com/ CONNECT WITH SHAWN https://linktr.ee/ShawnflynnSV Shawn Flynn's LinkedIn account, the Silicon Valley LinkedIn group account, Shawn Flynn's Facebook account, Email Shawn@thesiliconvalleypodcast.com
An API is a set of routines and programming standards for accessing a software application or web-based platform. The acronym API refers to the English term "Application Programming Interface", which translates as an interface for programming applications. Google Maps is one of the great examples in the API space. Through its original code, many other sites and applications use Google Maps data, adapting it as best suits them in order to make use of the service. But what do product people need to know about this subject? Is this a topic only for the tech team and not for product people? Is there a way to simplify the explanation of how an API works? Vagner's profile: https://bit.ly/3BNRw5n
Would this project have been done without the 21st Century Cures Act? FTA: Anthem, Humana and SS&C Technologies have entered into a joint venture named DomaniRx to develop a claims adjudication and pharmacy benefits manager cloud platform, according to documents filed by SS&C Technologies with the Securities and Exchange Commission. PBMs receive and store massive amounts of data in their function of claims adjudication. The use of an Application Programming Interface, or API, allows applications to talk to each other for a faster, seamless process. In December 2020, CMS proposed the Interoperability and Patient Access final rule, which became final on May 1. It is aimed at removing silos of information that prevent patient data exchange and promotes interoperability throughout the healthcare system. Starting July 1, payers were required to give patients access to their data through "open" APIs, subject to the provisions of HIPAA. Medicaid, CHIP and individual market Qualified Health Plan payers needed to build, implement and maintain APIs that enabled provider access to their patients' data. They were also required to streamline the prior authorization process. AHIP pushed back against the rule earlier this year, saying it was hastily constructed, requiring insurers to build these technologies without the necessary instructions. ----- I don't imagine it would have, but I'm curious what you think. #healthcare #healthIT #cio #cmio #chime #himss
https://www.healthcarefinancenews.com/news/anthem-and-humana-enter-multi-million-agreement-ssc-api-cloud-platform
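For a sense of what an "open API" request for patient data looks like: the Patient Access rule points toward HL7 FHIR-based APIs, and a FHIR read of a Patient resource is a plain HTTPS GET. The base URL and patient ID below are hypothetical, and a real request would carry an OAuth access token issued to the patient's chosen app.

```python
# Sketch of a FHIR-style patient read against a hypothetical payer endpoint.
# Base URL, ID and token are placeholders; the Accept header and resource
# path follow the standard FHIR REST conventions.
import requests

FHIR_BASE = "https://fhir.example-payer.com/R4"

resp = requests.get(
    f"{FHIR_BASE}/Patient/12345",
    headers={
        "Accept": "application/fhir+json",
        "Authorization": "Bearer <access-token>",
    },
    timeout=10,
)
patient = resp.json()
print(patient.get("resourceType"), patient.get("id"))
```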
Today's HousingWire Daily episode features a crossover episode of HousingWire's Housing News podcast. In this episode, HousingWire Editor in Chief Sarah Wheeler interviews Chazz Huston, the strategic alliances manager at Optimal Blue. During the episode, Huston and Wheeler discuss where the mortgage industry currently stands in the digital revolution.
In this conversation, we chat with Chris Dean, who is the Founder & CEO at Treasury Prime. Previously, Chris was the CTO & VP of Engineering at Standard Treasury, which was acquired by Silicon Valley Bank for an undisclosed amount. More specifically, we discuss all things banking-as-a-service, FinTech APIs, embedded finance, and the general evolution of the FinTech banking industry over the last decade.
It's "In the News..." the only LIVE diabetes newscast! Our top stories this week: Oral meds to prevent T1D move ahead, racial disparity in peds CGM use, what that Dexcom API news means, a new study with teens and Control IQ and a summer olympian talks about her recent T1D diagnosis. Join Stacey live on Facebook each Wednesday at 4:30pm EDT! Check out Stacey's book: The World's Worst Diabetes Mom! Join the Diabetes Connections Facebook Group! Sign up for our newsletter here ----- Use this link to get one free download and one free month of Audible, available to Diabetes Connections listeners! ----- Get the App and listen to Diabetes Connections wherever you go! Click here for iPhone Click here for Android Episode transcript below: Hello and welcome to Diabetes Connections In the News! I'm Stacey Simms and these are the top diabetes stories and headlines of the past seven days. As always, I'm going to link up my sources in the Facebook comments – where we are live – and in the show notes at d-c dot com when this airs as a podcast.. so you can read more when you have the time. XX Our top story.. A new oral drug to prevent type 1 is moving along in trials. Right now it's called IMT-002 – and put very simply - it's meant to block a genetic trait that increases the risk for the disease and is seen in a majority of patients. It's a new way to think about treating type 1 – phase two studies could start next year. It's thought that this could help with other auto-immune diseases as well.. the next condition these researchers want to tackle is celiac. https://www.biospace.com/article/releases/im-therapeutics-reports-positive-results-in-safety-tolerability-and-mechanism-of-action-of-phase-1b-trial-of-lead-drug-imt-002-in-type-1-diabetes/ XX Could the global rise in diabetes have an environmental component? In an Advances in Pharmacology article, researchers say routine exposure to chemicals that disrupt our endocrine systems play a role in triggering diabetes. These researchers say "We often attribute patient's disease risk to individual choices, and we don't necessarily think about how systems and environments play into disease risk," They go on to say so-called lifestyle factors like exercise and diet fail to fully account for "the dramatic rise and spread" of diabetes. https://www.ehn.org/environmental-factors-of-diabetes-2653768475/how-endocrine-disruptors-contribute-to-diabetes XX A new study shows Black children less likely to start or continue with a CGM after a type 1 diabetes diagnosis. These researchers at Children's Hospital of Philadelphia or CHOP as it's commonly known, show that a racial-ethnic disparity in CGM use begins within the first of year after diagnosis. White children were more than two and a half times more likely to start CGM compared with Black children and twice as likely to start CGM compared with Hispanic children. There was a disparity even when broken down by types of insurance – commercial or government. These researchers say social determinants including structural racism, are likely playing a role in disparities in care and outcomes https://www.healio.com/news/endocrinology/20210719/black-children-less-likely-to-start-continue-cgm-after-type-1-diabetes-diagnosis XX Very large survey of women shows that half of those with type 1 or type 2 diabetes are not getting pre-pregnancy counseling. This study included more than 100-thousand women. 
Right now guidelines from many groups including the CDC and American Diabetes Association recommend providers offer women with diabetes health counseling before pregnancy to cut down on the increased maternal and infant risk associated with both conditions. These researchers hope to develop better tools for women & their doctors. https://publichealth.berkeley.edu/news-media/research-highlights/women-with-diabetes-and-hypertension-dont-receive-pre-pregnancy-counseling/ XX Big increase for time in range when kids use hybrid closed loop systems. We've heard about a lot of improvement, but in this study, the percentage of kids and teens with t1d spending at least 70% time in range… more than doubled after 3 months of using Tandem's Control IQ system. This was a study of about 200 kids, median age was 14, and it was a real world study – where the kids went about their lives, not in a clinical setting, and the researchers pulled the data electronically. Interestingly, sleep mode use increased through 6 months, while the exercise mode was used less over time. Kids with an A1C over 9 saw the most improvement. Those with an A1C under 7 didn't see much of a change. https://www.healio.com/news/endocrinology/20210714/more-youths-with-type-1-diabetes-meet-timeinrange-goal-with-hybrid-closedloop-system XX Dexcom gets FDA clearance for real-time APIs. What does that mean? Third-party companies like Fitbit or Sugarmate, which have long integrated Dexcom data, have been doing so on a bit of a delay. Now they can do so in real time. API stands for Application Programming Interface, which is a software intermediary that allows two applications to talk to each other. Dexcom's Partner Web APIs will allow users to view all of their diabetes care data in one place to enable in-the-moment feedback and adjustments, the company said in the announcement. https://www.mobihealthnews.com/news/dexcom-gets-fda-nod-its-new-api-integration XX Cool new exhibit at Banting House – recent guests of the podcast and the museum celebrating the birthplace of insulin. They're set to open up again this week – the first time since March 2020 – and there's a new computer generated exhibit. It works outside, in the square where Dr. Banting's statue stands, giving visitors a virtual glimpse at the life and work of the man credited for the discovery of insulin. If you haven't visited – it's in Canada – or seen the museum, I highly recommend a spin around the website; we'll link it up. XX Summer Olympics are kicking off and by now you've probably heard that American trampoline gymnast Charlotte Drury was just recently diagnosed. She found out she had type 1 weeks before the 2021 Olympic qualifying trials, she revealed on Instagram last week. She and her coach pressed on and she basically got back into things within three weeks. She posted this photo of herself wearing the Dexcom. Drury is the first American woman to win a gold medal in trampoline at a World Cup. That's Diabetes Connections – In the News. If you like it, share it. And feel free to send me your news tips. Stacey @ diabetes dash connections dot com. Please join me wherever you get podcasts for our next episode - Tuesday – I'll share my conversation with Gold Medal Olympian Gary Hall Jr – when he was diagnosed in 1999 he was told to give up swimming. He didn't, and he talks about why... and how he overcame what was conventional wisdom for athletes at the time. This week's show is the story of Jack Tierney, diagnosed with type 1 in 1959; he's 81 and he says he's never felt better.
Thanks and I'll see you soon
An interview with Leonardo Santos, CEO at Semantix. We talk about the potential of the Application Programming Interface market and the possibility of creating an ecosystem that integrates Big Data, Artificial Intelligence and APIs. Join the Papo Cloud Makers group on Telegram. Episode notes at: papo.cloud/107 -------------------------------------------- Instagram / Twitter: @papocloud E-mail: contato@papo.cloud -------------------------------------------- Credits Production: Vinicius Perrott Editing: Senhor A - editorsenhor-a.com.br Support the show: https://www.picpay.com/convite?@L7R7XH See omnystudio.com/listener for privacy information.
Kong is a tech company based in San Francisco that sells technology for managing APIs, Application Programming Interfaces, which are the backbone of the digital world. They make it possible to manage the exchange of data, connecting companies with their suppliers, partners and customers. Kong's investors include none other than Travis Kalanick, the founder of Uber, Jeff Bezos, the founder of Amazon, and former Google CEO Eric Schmidt. A few months ago the company closed a $100 million round led by Tiger Global, taking its valuation to $1.4 billion and making it a unicorn. The story of Marco and his co-founder Augusto Marietti was written to be told in a podcast – and we guarantee it will astonish you! Marco and Augusto started working on Kong's predecessor, Mashape, from their garage in Milan when they were only 18. But in Italy they found no one who believed in their idea, and investors offered terms they could not accept. Instead of abandoning the dream, they bought a one-way ticket to San Francisco. Making it in Silicon Valley was anything but easy: they slept on benches, lost 20 kg eating nothing but pasta with tuna for months, and poured all their energy into growing the company until the breakthrough came in 2017. -- This episode was sponsored by Young Platform, an innovative platform that lets anyone learn and then simply start buying and selling cryptocurrencies easily and safely. The app is designed and built for anyone who wants to start out in the crypto market but doesn't know where to begin, so it is extremely simple and immediate to use. Camilla and I started learning with their Stepdrop app and we are ready to jump onto Young Platform. The big news is that you will get a €10 bonus on the app just for registering, by using the link we have included in the episode details or the code MADEIT when you sign up. In practice, Young will give you a starting budget to buy cryptocurrencies right away. To activate the bonus, you will need to sign up to the platform, complete identity verification and then make a minimum deposit of €50. In addition, within the app you can earn €5 for every friend you invite! https://youngexchange.page.link/Dw1j
API, or Application Programming Interface, is standardized technology that allows different applications to "talk" to each other. In today's episode we try and de-mystify the API and share real-world scenarios and situations on how an API can make integrating data for your business easy and seamless, including how the QuickBooks API can make everything from payment processing to financial task management faster and more efficient.***IsAware is powered by InterSoft Associates, who believes the more you know about your IT the better. Visit us at intersoftassociates.com and schedule your free consultation to talk about how custom software can help your business.
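As a hedged sketch of the integration pattern described above: once a payment clears, push it into the accounting system through its API instead of re-keying it by hand. The endpoint, fields and token below are placeholders, not the actual QuickBooks API, which has its own resource names and OAuth flow.

```python
# Illustration of API-based bookkeeping integration: record a cleared payment
# in the accounting system programmatically. URL and fields are placeholders.
import requests

payment = {"customer_id": "1001", "amount": 249.00, "currency": "USD"}

resp = requests.post(
    "https://accounting.example.com/api/payments",
    json=payment,
    headers={"Authorization": "Bearer <oauth-token>"},
    timeout=10,
)
resp.raise_for_status()
print("Recorded payment", resp.json().get("id"))
```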
My guest today is my friend, Joel Edgerton, the COO of BitFlyer USA. BitFlyer is a Bitcoin exchange and marketplace that enables its customers to buy, sell, and spend bitcoins. The site offers an Application Programming Interface that allows its clients to access and control their accounts using custom-written software. In addition, it offers businesses e-commerce payment services that can transfer Bitcoin, in order to provide users with secure trading, with every possible security measure, as a comprehensive Bitcoin platform. Prior to joining bitFlyer, Joel spent his career in traditional finance; most recently he worked for BNP Paribas Cardif as their COO, where he was leading the charge to pivot the company to a digital insurance company. He built a closed-loop customer experience program, supported by data-oriented system architecture and data-driven process optimization. In our conversation, we discuss the differences in regulation between the West and Japan, the general sentiment towards crypto among the Japanese people, and how crypto is being implemented within Japanese society. We also discuss why the crypto industry needs clear and explicit regulation, advice for TradFi people looking to make the jump to crypto, whether Satoshi is Japanese, and much more. Please enjoy my conversation with Joel Edgerton. --- Bitcasino From May 10 to May 22, 2021 (GMT), you could win $500 worth of Bitcoin prizes from a $10,000 Bitcoin prize pool; all you have to do is tweet the story behind your Bitcoin "Pizza" and include the unique hashtag #pizzaio and tag @BitcoinPizza_io Check it out at https://untoldstories.link/bitcasino --- Kava Kava is a leading DeFi platform trusted by Binance, Huobi, Kraken, Okex and Chainlink. Learn more about them at https://untoldstories.link/kava -- Shopping.io Shopping.io is the leader in mainstream Crypto E-commerce solutions. This is the first time Crypto E-commerce has ever gone international, building the bridge between Crypto and retail. My listeners get an additional 2% off on top of all loyalty program discounts, so be sure to check them out at https://untoldstories.link/shopping.io --- This podcast is powered by BlockWorks Group. For exclusive content and events that provide insights into the crypto and blockchain space, visit them at https://blockworks.co
Our topic today is APIs, or Application Programming Interfaces. Everyone is talking about APIs, but it seems that few of us really understand them. THIS is about to change! --- Support this podcast: https://podcasters.spotify.com/pod/show/steven-batiste/support
API stands for Application Programming Interface. It is an interface that defines the interactions between multiple software applications. You can use an API for free, or pay, to access an app's information. In this episode we give you examples of how business is done with APIs. APIs are what make it possible to connect your CRM, such as Mailchimp, to the forms on your website to capture a potential customer's information and follow up automatically to close a sale. An API makes it possible to build incredible solutions to everyday problems. Your iPhone or smartphone is connected to different APIs from different companies to give you the information you need. Your iPhone shows you the temperature in your city and you can access that information every day. The provider of that information is Yahoo Weather, a company owned by Yahoo. Yahoo Weather has an API that lets companies like Apple integrate with its software so that the weather forecast appears in your software. Google Maps is a great example of an API business. If you are a delivery app like Rappi, you have to have a map in your app. Google Maps charges Rappi (if they wanted to use it) a monthly fee for the use of its map; otherwise Rappi would have to develop its own maps app. Building that would be very expensive, and that is why APIs are very good business. I have interviewed several founders of startups and software-as-a-service companies that use API connections to solve problems in a specific industry. Examples are Nufi.mx, LeadSales.io, FacturAPI.com, Emissary.mx, Refly.me and Lead Layer. Let's talk. https://softwarecomoservicio.com/?p=2215
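To ground the weather example above: the phone's weather widget is just a client calling a provider's API with a key, and the provider can bill per request or per month. The URL, parameters and response fields below are invented for illustration, not any real weather provider's interface.

```python
# Illustrative weather-API call: the client sends a city and an API key,
# the provider returns structured data the app can display. Everything here
# (URL, params, fields) is a placeholder.
import requests

API_KEY = "<provider-api-key>"

resp = requests.get(
    "https://api.example-weather.com/v1/current",
    params={"city": "Monterrey", "units": "metric", "key": API_KEY},
    timeout=10,
)
data = resp.json()
print(f"{data['city']}: {data['temperature_c']} °C, {data['conditions']}")
```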
API stands for Application Programming Interface, an intermediate structure between the requesting user or device and the system that hosts the required service or information. An API thus makes interaction between the parties easier, because it fixes in advance what kinds of communication can be established. The post "¿Cómo una API web permite integrar diversos servicios fácilmente?" (How does a web API make it easy to integrate different services?) appeared first on Codster.
Broker Brett breaks down his journey from researching APIs and writing blog posts to being a "Broke Angel" investor. API is the acronym for Application Programming Interface, which is a software intermediary that allows two applications to talk to each other. Each time you use an app like Facebook, send an instant message, or check the weather on your phone, you're using an API. Broker Brett Radio is proudly powered by Newport Beach Insurance Center; please reach out for Personal, Commercial, or Life Insurance quotes. info@NPBIC.com
API is the English term "Application Programming Interface", that is, an interface for programming applications, and today it is common to talk about them. In this episode I explain the advantages and why you might want to set one up for your business, or build one yourself. We will see how several applications or services communicate through what is called an interface, which exposes how to access it and what it makes available. We will also talk about REST APIs (Representational State Transfer), to cover a bit more ground. And of course we review what it means to be backend or frontend, and the much sought-after full stack. I hope you like it and that you share your comments and suggestions. I also invite you to share it on your networks, with your friends or with those just starting out in the IT world. Happy coding to everyone, and see you next time. Find more content on our networks: - https://www.facebook.com/codigotecno/ - https://www.instagram.com/codigotecno Or send me an email at: codigotecno (at) hotmail.com or on Telegram @soleralejandro If it inspired you, leave a comment, like or suggestion on the most popular podcast networks: * On Ivoox: https://bit.ly/2JoLotl * On Spotify: https://spoti.fi/31Dp4Sq * On iTunes: https://apple.co/2WNKWHV * On YouTube: https://bit.ly/2JLaKRj * Via Player.fm: https://player.fm/series/codigotecno * And on Google Podcasts: https://www.shorturl.at/FKP17 https://www.alejandrosoler.com.ar
APIs, for Application Programming Interface, have completely changed the way web services are written. Today they are practically unavoidable, partly because they simplify application development, and partly because they establish a documented contract with their consumers. Created in the early 2000s, they later became popular, and their number exploded with the arrival of microservices. But as we build APIs, we quickly realise that they all have things in common, such as authentication, throttling or rate limiting. Rewriting this common core would not only be long and tedious, it would also give a different user experience for every API. This is where API managers come in, such as Kong, which is without doubt the most popular of them today. In this episode, I have the pleasure of welcoming Thibault Charbonnier. Thibault is a principal engineer at Kong Inc., and he is not only the maintainer of Kong, he also took part in creating it! With him, I look back at Kong, from its origins to today, at how it works and at its use cases. Support the show (https://www.patreon.com/electromonkeys)
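As a tiny illustration of the "common core" a gateway factors out of every API, here is a sketch of a per-client rate limiter. Real gateways such as Kong implement this (plus authentication, logging, transformations) as configurable plugins in front of any upstream service; this is only the idea, not Kong's implementation.

```python
# Sliding-window rate limiter sketch: allow at most MAX_REQUESTS calls per
# client in the last WINDOW_SECONDS. At a gateway, a rejection would map to
# an HTTP 429 response before the request ever reaches the upstream API.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 100
_history: dict[str, deque] = defaultdict(deque)

def allow_request(client_id: str) -> bool:
    now = time.monotonic()
    calls = _history[client_id]
    while calls and now - calls[0] > WINDOW_SECONDS:
        calls.popleft()              # drop calls outside the window
    if len(calls) >= MAX_REQUESTS:
        return False
    calls.append(now)
    return True
```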
API is the acronym for Application Programming Interface, which is a software intermediary that allows two applications to talk to each other. As #Marketers we are often confused about the definition of an #API because we think it is very technical and that we need to understand a coding language to grasp it. That is not true: to build one or to connect one you do need programming knowledge, but you don't need it to use and understand APIs. That is why today we are going to explain it with a very simple example you see every day in your life: your experience in a restaurant. Hopefully, this example will stay with you for life :) PS - This example is one of the examples we teach in our classroom
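To make the restaurant analogy concrete, here is a toy sketch (purely illustrative): the menu is the documented contract, the waiter is the API carrying requests and responses, and the kitchen is the backend the customer never sees.

```python
# Toy model of the restaurant analogy: the customer (client) never enters the
# kitchen (server); the waiter (API) carries a well-defined request and brings
# back a well-defined response.
class Kitchen:                      # the backend system
    def prepare(self, dish: str) -> str:
        return f"plate of {dish}"

class Waiter:                       # the API: fixed menu, fixed contract
    MENU = {"pasta", "pizza", "salad"}

    def __init__(self, kitchen: Kitchen):
        self._kitchen = kitchen

    def order(self, dish: str) -> str:
        if dish not in self.MENU:   # request validation, like an HTTP 400
            raise ValueError(f"'{dish}' is not on the menu")
        return self._kitchen.prepare(dish)

# The customer (a client application) only ever talks to the waiter.
waiter = Waiter(Kitchen())
print(waiter.order("pasta"))        # -> plate of pasta
```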
Hey everyone, thank you for tuning in. You are listening to Prashant, where I talk about dotnet and related technologies. Today I have taken up a very interesting topic called Web API. We will try to understand what a Web API is and why it was brought into web development. As per Wikipedia, an API is the Application Programming Interface for either a web server or a web browser. Before we jump into what a Web API is, let's try to understand how web applications used to work before Web API. In the early days of web application development, both the presentation layer and the server-side code had to live in the same project. Such tight coupling of server-side and client-side code did not allow developers to work separately. Until the backend work was complete, the frontend work could not be started. This tight coupling also restricted the use of multiple tech stacks in the product. Deploying such applications became a pain, since it was not possible to deploy the presentation and server code onto different machines; they had to share the same machine. That resulted in less utilization of server resources and increased the cost of deployment. The major problem with such an approach was that, after a certain time, adding new features to the product or removing any feature became very difficult, and it was hard for new developers to understand the system. All such delays and costs resulted in a huge revenue decline for software companies. Then the concept of the Web API evolved. It promised to bring a clear separation into the product code. If someone wants to develop the server-side code in dotnet core, the presentation layer can be developed in any language. It provides flexibility in choosing the tech stack, and the same API can be reused by different presentation layers. This means that once the product functionality is written, it can be consumed by any presentation layer. Web API also solved the deployment problem, because the presentation and server code can be deployed on different cloud servers, and it allowed the UI and backend developers to work independently. Now the next question is: how is a Web API able to solve this problem? The server-side code exposes endpoints which can be consumed by the presentation layer using HTTP or SOAP. Endpoints are nothing but URIs through which any interface can consume data from, or post data to, the server. Web API brought a drastic change in the way web applications used to look. I hope I have covered most of the basics of Web API. Stay tuned, I will bring some more interesting topics for you. Till then stay happy and keep sharing the knowledge. --- Send in a voice message: https://anchor.fm/prashantmaurya/message
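The episode is about ASP.NET Web API, but the decoupling it describes is language-agnostic. Here is a minimal sketch in Python (using Flask purely as an illustration, not the episode's stack) of a backend exposing an HTTP endpoint that any frontend can consume; in ASP.NET, a controller action plays the same role.

```python
# Minimal sketch of the decoupling described above: the backend exposes an
# HTTP endpoint, and any frontend (web, mobile, another service) consumes it.
from flask import Flask, jsonify

app = Flask(__name__)

PRODUCTS = [{"id": 1, "name": "Widget"}, {"id": 2, "name": "Gadget"}]

@app.route("/api/products", methods=["GET"])
def list_products():
    # The frontend team can build against this contract without waiting for
    # (or even seeing) the rest of the backend code.
    return jsonify({"products": PRODUCTS})

if __name__ == "__main__":
    app.run(port=5000)
```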
Matt Wright from RubiconRed explains what an API, or Application Programming Interface, is, and how it can help an organisation digitally improve itself. Matt also talks through why APIs matter, using a few examples. In Part 2, Matt talks about how organisations can get started on their own API journey with five stages, from business strategy through to technology platform.
In this exciting podcast we get to hear about how blockchain is used to track the coronavirus. We had the pleasure of having Jim Nasr, CEO of Acoer, developers of the blockchain coronavirus tracker, talk to us about the challenges the healthcare industry faces and the opportunities for innovating it.
What is blockchain?
Jim looks at blockchain in terms of three pillars: technology, token economics and distributed computing. The first pillar is the technology of distributed ledgers themselves. The second pillar is value-creation attribution, specifically on public blockchains. This is where a blockchain provides a reward, in a public and transparent manner, for the creation of value. In the Bitcoin network, for example, miners are rewarded with 12.5 Bitcoin for completing a mathematical challenge, for which they will have used electricity and processing power. The third pillar is distributed architecture and distributed computing. For Jim this is fundamentally a question of culture: distributing power away from a few central figures or central servers and removing intermediaries.
The Coronavirus Tracker
On the 3rd of February Acoer announced the launch of its coronavirus tracker. Acoer is a software development company narrowly focused on modern, open and interoperable healthcare software. Acoer has created a data visualization tool to track the deadly coronavirus. The tool, known as the HashLog data visualization engine, interacts in real time with Hedera Hashgraph's distributed ledger technology. This allows researchers, scientists and journalists to understand the spread of the coronavirus and its trends over time through visuals presented on Acoer's HashLog dashboard. In creating this tool Jim had looked at existing trackers, particularly the Johns Hopkins one, and felt Acoer could add a more global perspective and use its existing visualization engine, HashLog, to make its tracker more usable, dynamic and filterable. Jim has long been a firm believer that when it comes to public health data surveillance, blockchain can be a source of truth and a source of accountability. If you tune the token economics and the game theory correctly, you can incentivize good data collection and disincentivize bad players from gaming the system. Acoer are constantly growing their data sources. Today they have clinical data from the CDC and other sources that show relevant clinical trials happening for treating coronavirus. Google Trends and social media provide in-context data.
Why blockchain?
It was pointed out to Jim that the Johns Hopkins tracker uses the same data as Acoer's, from the CDC and the WHO, but does so with APIs, Application Programming Interfaces, and not with a blockchain. Jim points out that their tracker also uses APIs. APIs are the modern way for different systems that are unaware of each other to communicate using standard protocols. For example, the clinical trials data Acoer gets comes through APIs from https://clinicaltrials.gov/. For Jim, blockchain is supplementary to APIs. The blockchain ingests all this data from all kinds of sources via APIs and can confirm whether or not the integrity of that data has been changed at any stage. Acoer can thus confirm that it hasn't manipulated the CDC data, for example, as they can provide a real-time audit trail of the data on the public Hedera DLT.
The reason for using a public blockchain is to provide clear references to the data's provenance. Why Hedera Hashgraph? Having experimented with numerous public blockchains, Jim wanted near real-time responsiveness for the coronavirus tracker, and Hedera's consensus algorithm is mathematically proven to be optimal. In addition, Hedera's high number of transactions per second and the finality of transactions within a few seconds was very imp...
Application Programming Interface: a communication protocol or interface used to interact between different pieces of software. In networking, you could have an API available for interaction between a Wireless Controller and a script coded by the network admin. APIs make it very easy to interact with our networking equipment to configure, operate and monitor it. In […] The post CTS 208: Introduction to APIs appeared first on Clear To Send.
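To give a feel for what "interacting with networking equipment through an API" can look like, here is a hedged Python sketch that queries a hypothetical wireless controller's REST endpoint for its list of access points. The URL, token and response fields are made up for illustration and do not belong to any specific vendor.

```python
# Hypothetical example: reading access-point status from a wireless controller's REST API.
# The controller URL, token and response fields are illustrative, not a real vendor API.
import requests

CONTROLLER = "https://wlc.example.net/api/v1"   # hypothetical controller base URL
TOKEN = "replace-with-a-real-api-token"

def list_access_points():
    resp = requests.get(
        f"{CONTROLLER}/access-points",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

for ap in list_access_points():
    print(ap["name"], ap["channel"], ap["client_count"])
```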
One of the most substantial challenges in system development is the application programming interface. The result is (ideally) a stable and well-defined interface for your system. That can be a struggle when your solution is not exactly stable or well-defined. Nevertheless, that does not mean you can ignore the API until implementation; it needs to be addressed in the architecture from the start. A Basic Application Programming Interface: When you know an API will be required, you can always take the simple route. Review the core data items and features, then create a pass-through of sorts (a minimal sketch follows this entry). While this is not the most elegant solution (and not recommended), it is a start. When you take this approach, try to keep the endpoints minimal; you can grow them properly as the system evolves and matures. Set The Stage: The best map for your API is going to come from the user stories and requirements. There are two primary reasons for an API: you are either providing integration for external systems or an import/export feature. In the latter case, that boils down to a modest API with a few ways to extract or send in data. In the former case, you will be providing endpoints to match the user stories. Therefore, your API will be driven either by your data structures or by your feature set. That is not to say you cannot do both; an API is almost always some combination of those two areas of focus. Stay Ahead Of The Needs: We started by mentioning that you can build a solution and then provide an API, or you can design the API before implementation. You will find it far more challenging to "tack on" an API after implementation. There are often assumptions made within an application that will no longer be valid once you open its doors via an API.
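To make the "simple pass-through" idea concrete, here is a minimal sketch using Flask (my choice of framework, not one named in the episode) that exposes a couple of read-only endpoints over existing core data items and leaves room to grow them as the system matures.

```python
# A minimal "pass-through" API sketch with Flask: expose core data items first,
# keep the endpoints few, and grow them as the system matures.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Stand-in for the system's core data store (illustrative only).
CUSTOMERS = {1: {"id": 1, "name": "Acme Ltd"}, 2: {"id": 2, "name": "Globex"}}

@app.route("/api/customers", methods=["GET"])
def list_customers():
    return jsonify(list(CUSTOMERS.values()))

@app.route("/api/customers/<int:customer_id>", methods=["GET"])
def get_customer(customer_id):
    customer = CUSTOMERS.get(customer_id)
    if customer is None:
        abort(404)                 # unknown id: plain 404, no presentational overhead
    return jsonify(customer)

if __name__ == "__main__":
    app.run(port=5000)
```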
IT Manager Podcast (DE, German) - IT terms explained simply and understandably
Today everything revolves around the question: "What exactly is an API?" The term API comes from English and is short for "Application Programming Interface". Loosely translated, that means something like "interface for application programming". Colloquially, though, an API is usually just called a programming interface, and it is indispensable for modern programming. In general, programming interfaces are used for communication, for exchanging data and content, and for processing that data further between different devices, programs, websites or providers. APIs make it possible, for example, to control your heating at home remotely by transmitting the temperature values measured by a thermostat via a cloud to the end user's smartphone. In contrast to a user interface, the applications communicate directly with each other rather than a person communicating with a system. Suppose an internet user orders a television from a web shop. The user only interacts with the shop's web front end, but the shop system itself can use programming interfaces to check the customer's creditworthiness, initiate payment by credit card or PayPal, take out an extended warranty with an insurer and commission a carrier. In other business processes, programming interfaces ensure, for example, the exchange and further processing of data and content between Customer Relationship Management (CRM), a document management system (DMS) and Enterprise Resource Planning (ERP) via the cloud. But how exactly does an API work? As already mentioned, a programming interface connects software and hardware components such as applications, hard drives or user interfaces; in other words, with an API two applications that are independent of each other can interact smoothly and exchange content, resources and data. Data and commands are handed over in a structured way according to a previously defined syntax. To achieve this, individual program parts that fulfil specific functions are encapsulated from the rest of the program. The modules communicate with one another only via the defined API, and only the data needed for the further program flow is passed across it. The API defines in what form data is accepted from the application module and returned again; the actual program flow inside the module does not matter to the API. In contrast to a binary interface (ABI for short), with an API the program binding takes place purely at the source-code level. Providing an API usually comes with detailed electronic or printed documentation listing the individual interface functions, the exact syntax and the possible parameters. Broadly, programming interfaces can be divided into several types: function-oriented APIs, file-oriented APIs, object-oriented APIs, protocol-oriented APIs and RESTful APIs. Function-oriented APIs: function-oriented programming interfaces respond only to function calls, with or without a return value. Calling a function returns a reference value (a handle); this value is used to call further functions, and once all functions are available, the handle is closed.
File-oriented APIs: file-oriented programming interfaces are addressed via the commands open, read, write and close. Data to be sent is written with write; data to be received is read in with the read command. Object-oriented APIs: object-oriented programming interfaces address defined interface pointers, which gives this interface type greater flexibility than function-oriented interfaces. Often a type or class library is passed along with the data. Protocol-oriented APIs: protocol-oriented programming interfaces are independent of the operating system and hardware, but the transfer protocol has to be re-implemented each time. To simplify this, protocol-oriented interfaces are usually wrapped by interface- or function-oriented interfaces. RESTful API: the RESTful API is a special case. It is an interface for requests in HTTP format; data is accessed via GET, PUT, POST and DELETE commands (a small sketch of these verbs in action follows this entry). Programming interfaces can now be found in many areas of very different software applications. On the web, APIs are frequently used in online shops and content management systems: they allow different payment providers, online marketplaces, shop rating systems, shipping providers and other services to be connected to the various systems in a standardised way with little effort. For example, there are APIs for services such as Wikipedia, Google Maps, Facebook, Twitter, PayPal, DHL, etc. Before we come to the end of today's podcast, I would like to address the question: what advantages does the use of programming interfaces bring? One of the many advantages is that it simplifies software development by automating complicated and time-consuming tasks. This means that certain tasks that would take several hours can now be carried out in a few seconds. In addition, the connected programs are less error-prone and easier to maintain because modular program code is used. If individual functions work incorrectly, only the affected modules and the data passed at the API need to be examined more closely. Another advantage of a cleanly documented programming interface is the possibility of outsourcing programming work: thanks to the programming interface, the development of individual parts of a piece of software can be handed over to an external software company or freelance developers with little effort. Third parties can also develop functions for the system themselves, which increases the attractiveness and flexibility of the overall product and brings clear competitive advantages. However, alongside these advantages there are also disadvantages: to connect other applications, hard drives or user interfaces individually, you need programming skills. So if you do not have a developer in-house, you have to commission someone externally. Developing a connection also takes time. Contact: Ingo Lücker, ingo.luecker@itleague.de
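As a small illustration of the RESTful style described above, the Python sketch below issues POST, GET, PUT and DELETE requests against a hypothetical resource URL; the endpoint and payload fields are assumptions, not any specific provider's API.

```python
# RESTful API sketch: the four HTTP verbs against a hypothetical "orders" resource.
import requests

BASE = "https://shop.example.com/api/orders"   # hypothetical endpoint

# Create a new order (POST), then read, update and delete it.
created = requests.post(BASE, json={"item": "TV", "qty": 1}, timeout=10).json()
order_url = f"{BASE}/{created['id']}"

order = requests.get(order_url, timeout=10).json()                   # GET: read it back
requests.put(order_url, json={"item": "TV", "qty": 2}, timeout=10)   # PUT: update it
requests.delete(order_url, timeout=10)                               # DELETE: remove it
```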
You'd be forgiven for dismissing APIs as tech jargon, but the truth is we'd be lost without them. If you've ever booked a hotel online, ordered a food delivery or done your weekly supermarket shop on your laptop, you will have used an API (an Application Programming Interface). It's the code that allows information to be passed from one system to another - and without it, the internet as we know it would struggle to exist. APIs are already being used in the NHS, but could they be used more to help transform the way that healthcare is delivered? And what are the ethical considerations that need to be taken into account with regard to the use of patient data? In this episode of Healthy Thinking, the Chief Executive of Life Sciences Hub Wales, Cari-Anne Quinn, speaks to three people leading the charge in the development of APIs in order to imagine what comes next. Charlotte Nielsen is a technical specialist at IBM. She looks after the API use cases for the company across the UK. Charlotte tells Cari-Anne that the NHS should open up its data to developers in order to truly realise the full potential of APIs: "Obviously in a very secure way and in an appropriate way, especially in the healthcare industry. But in order to innovate, you have to open up that data to whoever it might be that it's relevant to do the job right. There are people out there who are innovating in their garage. Who are creating applications that end up being world-renowned organisations - the Ubers of this world." Gary Bullock is Director of Application Development and Support at the NHS Wales Informatics Service. NWIS is already experimenting with APIs: "We did some work … on demographics, on reference data, on diagnostic results and observations. And we also have examples of suppliers who have patient platforms connecting to the national architecture and receiving diagnostic results using those technologies - and using that to care for patients in the live setting." Also on the panel is Mark Wardle, a clinician and chair of the Welsh Technical Standards Board. He tells Cari-Anne: "It's just so exciting to step back and reimagine how we can provide health with the digital tools that we now have available. And we just need the ambition to do that."
An API is a set of instructions, routines and programming standards used to access an internet-based application. With it, a computer or another application can "understand" the instructions of that application, interpret its data and use it for integration with other platforms and software, generating new instructions to be executed by those programs or computers. - Ah, right... nice! Now I know what an API is. But how does that help people in their day-to-day lives? In fact, not only people but also companies and many other kinds of organisations can benefit from using APIs. But hold on, let's first see what each of those letters means: API is the abbreviation of Application Programming Interface in English, or "Interface de Programação de Aplicação" in Portuguese. Well, now that the fundamentals are clear, let's get to the interesting part: how do APIs help people and companies? Post link: https://pluga.co/blog/api/o-que-e-api/.
FreeBSD Foundation September Update, tiny C lib for programming Unix daemons, EuroBSDcon trip reports, GhostBSD tested on real hardware, and a BSD auth module for duress passwords. ##Headlines FreeBSD Foundation Update, September 2018 MESSAGE FROM THE EXECUTIVE DIRECTOR Dear FreeBSD Community Member, It is hard to believe that September is over. The Foundation team had a busy month promoting FreeBSD all over the globe, bug fixing in preparation for 12.0, and setting plans in motion to kick off our 4th quarter fundraising and advocacy efforts. Take a minute to see what we've been up to and please consider making a donation to help us continue our efforts supporting FreeBSD! September 2018 Development Projects Update In preparation for the release of FreeBSD 12.0, I have been working on investigating and fixing a backlog of kernel bug reports. Of course, this kind of work is never finished, and we will continue to make progress after the release. In the past couple of months I have fixed a combination of long-standing issues and recent regressions. Of note are a pair of UNIX domain socket bugs which had been affecting various applications for years. In particular, Chromium tabs would frequently hang unless a workaround was manually applied to the system, and the bug had started affecting recent versions of Firefox as well. Fixing these issues gave me an opportunity to revisit and extend our regression testing for UNIX sockets, which, in turn, resulted in some related bugs being identified and fixed. Of late I have also been investigating reports of issues with ZFS, particularly those reported on FreeBSD 11.2. A number of regressions, including a kernel memory leak and issues with ARC reclamation, have already been fixed for 12.0; investigation of other reports is ongoing. Those who closely follow FreeBSD-CURRENT know that some exciting work to improve memory usage on NUMA systems is now enabled by default. As is usually the case when new code is deployed in a diverse array of systems and workloads, a number of problems have since been identified. We are working on resolving them as soon as possible to ensure the quality of the release. I'm passionate about maintaining FreeBSD's stability and dependability as it continues to expand and grow new features, and I'm grateful to the FreeBSD Foundation for sponsoring this work. We depend on users to report problems to the mailing lists and via the bug tracker, so please try running the 12.0 candidate builds and help us make 12.0 a great release. Fundraising Update: Supporting the Project It's officially Fall here at Foundation headquarters and we're heading full-steam into our final fundraising campaign of the year. We couldn't even have begun to reach our funding goal of $1.25 million without the support of the companies who have partnered with us this year. Thank you to Verisign for becoming a Silver Partner. They now join a growing list of companies like Xiplink, NetApp, Microsoft, Tarsnap, VMware, and NeoSmart Technologies that are stepping up and showing their commitment to FreeBSD! Funding from commercial users like these and individual users like yourself helps us continue our efforts to support critical areas of FreeBSD such as: Operating System Improvements: Providing staff to immediately respond to urgent problems and implement new features and functionality, allowing for the innovation and stability you've come to rely on.
Security: Providing engineering resources to bolster the capacity and responsiveness of the Security team, providing your users with peace of mind when security issues arise. Release Engineering: Continuing to provide a full-time release engineer, resulting in timely and reliable releases you can plan around. Quality Assurance: Improving and increasing test coverage, continuous integration, and automated testing with a full-time software engineer to ensure you receive the highest quality, secure, and reliable operating system. New User Experience: Improving the process and documentation for getting new people involved with FreeBSD, and supporting those people as they become integrated into the FreeBSD Community, providing the resources you may need to get new folks up to speed. Training: Supporting more FreeBSD training for undergraduates, graduates, and postgraduates. Growing the community means reaching people and catching their interest in systems software as early as possible, and providing you with a bigger pool of candidates with the FreeBSD skills you're looking for. Face-to-Face Opportunities: Facilitating collaboration among members of the community, and building connections throughout the industry to support a healthy and growing ecosystem and make it easier for you to find resources when questions emerge. We can continue the above work if we meet our goal this year! If your company uses FreeBSD, please consider joining our growing list of 2018 partners. If you haven't made your donation yet, please consider donating today. We are indebted to the individual donors and companies listed above who have already shown their commitment to open source. Thank you for supporting FreeBSD and the Foundation! September 2018 Release Engineering Update The FreeBSD Release Engineering team continued working on the upcoming 12.0-RELEASE. At present, the 12.0 schedule has been adjusted by one week to allow for necessary works-in-progress to be completed. Of note, one of the works-in-progress involves updating OpenSSL from 1.0.2 to 1.1.1, in order to avoid breaking the application binary interface (ABI) on an established stable branch. Due to the level of non-trivial intrusiveness that has already been discovered and addressed in a project branch of the repository, it is possible (but not yet definite) that the schedule will need to be adjusted by another week to allow more time for the larger, related updates stemming from this particular change. Should the 12.0-RELEASE schedule need to be adjusted at any time during the release cycle, the schedule on the FreeBSD project website will be updated accordingly. The current schedule is available at: https://www.freebsd.org/releases/12.0R/schedule.html BSDCam 2018 Trip Report: Marie Helene Kvello-Aune I'd like to start by thanking the FreeBSD Foundation for sponsoring my trip to BSDCam(bridge) 2018. I wouldn't have managed to attend otherwise. I've used FreeBSD in both personal and professional deployments since the year 2000, and over the last few years I have become more involved with development and documentation. I arrived at Gatwick, London at midnight. On Monday, August 13, I took the train to Cambridge, and decided to do some touristy activities as I walked from the train station to Churchill College. I ran into Allan outside the hotel right before the sky decided it was time for a heavy rainfall. Monday was mostly spent settling in, recouping after travel, and hanging out with Allan, Brad, Will and Andy later in the afternoon/evening.
Read more… Continuous Integration Update The FreeBSD Foundation has sponsored the development of the Project's continuous integration system, available at https://ci.FreeBSD.org, since June. Over the summer, we improved both the software and hardware infrastructure, and also added some new jobs to extend test coverage of the -CURRENT and -STABLE branches. Following are some highlights. New Hardware The Foundation purchased 4 new build machines to scale up the computation power for the various test jobs. These newer, faster machines substantially shorten the time it takes to test amd64 builds, so that failing changes can be identified more quickly. Also, in August, we received a donation of 2 PINE A64-LTS boards from PINE64.org, which will be put in the hardware test lab as one part of the continuous tests. CI Staging Environment We used hardware from a previous generation of the CI system to build a staging environment for the CI infrastructure, which is available at https://ci-dev.freebsd.org. It executes the configurations and scripts from the "staging" branch of the FreeBSD-CI repository, and the development feature branches. We also use it to experiment with the new version of the Jenkins server and plugins. Having a staging environment avoids affecting the production CI environment, reducing downtime. Mail Notification In July, we turned on failure notification for all the kernel and world build jobs. Committers will receive email containing the build information and failure log to inform them of possible problems with their modifications on certain architectures. For amd64 on the -CURRENT branch, we also enabled notification for failing regression test cases. Currently mail is sent only to the individual committers, but with help from the postmaster team, we have created a dev-ci mailing list and will soon also be sending notifications there. New Test Job In August, we updated the embedded script of the virtual machine image. Originally it only executed pre-defined tests, but now this behavior can be modified by the data on the attached disk. This mechanism is used for adding new ZFS test jobs. We are also working on analyzing and fixing the failing and skipped test cases. Work in Progress In August and September, we had two developer summits, one in Cambridge, UK and one in Bucharest, Romania. In these meetings, we discussed running special tests, such as ztest, which need a longer run time. We also planned network testing for the TCP/IP stack. ###Daemonize - a Tiny C Library for Programming the UNIX Daemons Whatever they say, writing System-V style UNIX daemons is hard. One has to follow many rules to make a daemon process behave correctly on diverse UNIX flavours. Moreover, debugging such code can be somewhat tricky. On the other hand, the process of daemon initialisation is rigid and well defined, so the corresponding code has to be written and debugged once and can later be reused countless times. The developers of BSD UNIX were very aware of this, as a C library function, daemon(), has been available there since version 4.4. The function, although non-standard, is present on many UNIXes. Unfortunately, it does not follow all the required steps to reliably run a process in the background on systems which follow System-V semantics (e.g. Linux). The details are available in the corresponding Linux man page. The main problem here, as I understand it, is that daemon() does not use the double-forking technique to avoid the situation where zombie processes appear.
Whenever I encounter a problem like this one, I know it is time to write a tiny C library which solves it. This is exactly how 'daemonize' was born (GitHub mirror). The library consists of only two files which are meant to be integrated into the source tree of your project. Recently I have updated the library and realised that it would be good to describe how to use it on this site. If for some reason you want to make a Windows service, I have battle-tested template code for you as well. System-V Daemon Initialisation Procedure To make the discussion clear we shall quote the steps which have to be performed during daemon initialisation (according to the daemon(7) manual page on Linux). I do this to demonstrate that the task is trickier than one might expect. So, here we go: Close all open file descriptors except standard input, output, and error (i.e. the first three file descriptors 0, 1, 2). This ensures that no accidentally passed file descriptor stays around in the daemon process. On Linux, this is best implemented by iterating through /proc/self/fd, with a fallback of iterating from file descriptor 3 to the value returned by getrlimit() for RLIMIT_NOFILE. Reset all signal handlers to their default. This is best done by iterating through the available signals up to the limit of _NSIG and resetting them to SIG_DFL. Reset the signal mask using sigprocmask(). Sanitize the environment block, removing or resetting environment variables that might negatively impact daemon runtime. Call fork(), to create a background process. In the child, call setsid() to detach from any terminal and create an independent session. In the child, call fork() again, to ensure that the daemon can never re-acquire a terminal again. Call exit() in the first child, so that only the second child (the actual daemon process) stays around. This ensures that the daemon process is re-parented to init/PID 1, as all daemons should be. In the daemon process, connect /dev/null to standard input, output, and error. In the daemon process, reset the umask to 0, so that the file modes passed to open(), mkdir() and suchlike directly control the access mode of the created files and directories. In the daemon process, change the current directory to the root directory (/), in order to avoid that the daemon involuntarily blocks mount points from being unmounted. In the daemon process, write the daemon PID (as returned by getpid()) to a PID file, for example /run/foobar.pid (for a hypothetical daemon "foobar") to ensure that the daemon cannot be started more than once. This must be implemented in a race-free fashion so that the PID file is only updated when it is verified at the same time that the PID previously stored in the PID file no longer exists or belongs to a foreign process. In the daemon process, drop privileges, if possible and applicable. From the daemon process, notify the original process started that initialization is complete. This can be implemented via an unnamed pipe or similar communication channel that is created before the first fork() and hence available in both the original and the daemon process. Call exit() in the original process. The process that invoked the daemon must be able to rely on the fact that this exit() happens after initialization is complete and all external communication channels are established and accessible.
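The double-fork idea at the heart of those steps can be sketched in a few lines. The Python below is a simplified illustration of the fork/setsid/fork, stream redirection, umask and chdir steps only; it is not the 'daemonize' C library itself, and it skips the descriptor cleanup, signal, PID file and notification handling described above.

```python
# Simplified System-V style daemonisation sketch (not the 'daemonize' C library).
# Covers the fork/setsid/fork, stream redirection, umask and chdir steps only.
import os
import sys

def daemonise():
    if os.fork() > 0:          # first fork: the parent returns to the shell
        sys.exit(0)
    os.setsid()                # detach from the controlling terminal, new session
    if os.fork() > 0:          # second fork: the session leader exits, so the
        sys.exit(0)            # daemon can never re-acquire a terminal
    os.chdir("/")              # avoid blocking mount points from being unmounted
    os.umask(0)                # let open()/mkdir() modes apply directly
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):       # connect stdin/stdout/stderr to /dev/null
        os.dup2(devnull, fd)

if __name__ == "__main__":
    daemonise()
    # ... daemon work goes here; write a PID file and drop privileges as needed ...
```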
The library performs most of the above-mentioned initialisation steps; it quickly becomes evident that the implementation details of some of them depend heavily on the internal logic of the application itself, so it is not possible to implement them in a universal library. I believe this is not a flaw, though, as the missing parts are safe to implement in application code. The Library's Application Programming Interface The generic programming interface was loosely modelled after the above-mentioned BSD daemon() function. The library provides two user-available functions (one is, in fact, implemented on top of the other) as well as a set of flags to control the daemon creation behaviour. Conclusion The objective of the library is to hide all the trickery of programming a daemon so you can concentrate on the more creative parts of your application. I hope it does this well. If you are not only interested in writing a daemon, but also want to make yourself familiar with the techniques used to accomplish that, the source code is available. Moreover, I would advise anyone who is starting to develop for a UNIX environment to do so, as it shows many intricacies of programming for these platforms. ##News Roundup EuroBSDCon 2018 travel report and obligatory pics This was my first big BSD conference. We also planned - planned might be a big word - thought about doing a devsummit on Friday. Since the people who were in charge of that had a change of plans, I was sure it'd go horribly wrong. The day before the devsummit and still in the wrong country, I put the hours and venue on the wiki and booked a reservation at a restaurant. It turns out that everything was totally fine, and since the devsummit was at the conference venue (which was hosting tutorials that day), they even had signs pointing at the room we were given. Thanks EuroBSDCon conference organizers! At the devsummit, we spent some time hacking. A few people came with "travel laptops" without access to anything, like Riastradh, so I gave him access to my own laptop. This didn't hold very long and I kinda forgot about it, but for a few moments he had access to a NetBSD source tree and an 8-thread, 16GB RAM machine with which to build things. We had a short introduction and I suggested we take some pictures, so here are the ones we got. A few people were concerned about privacy, so they're not pictured. We had a small team to hold the camera :-) At the actual conference days, I stayed at the speaker hotel with the other speakers. I attempted to make conversation with some visibly FreeBSD/OpenBSD people, but didn't have plans to talk about anything, so there was a lot of just following people silently. Perhaps for the next conference I'll prepare a list of questions for random BSD people and then very obviously grab a piece of paper and ask, "what was…", read a bit from it, and say, "your latest kernel panic?"; I'm sure it'll be a great conversation starter. At the conference itself, it was pretty cool to have folks like Kirk McKusick give first-person accounts of past events (Kirk gave a talk about governance at FreeBSD), or to hear the second keynote by Ron Broersma. My own talk was hastily prepared; it was difficult to bring the topic together into a coherent talk. Nevertheless, I managed to talk about stuff for a good 40 minutes, though usually I skip over so many details that I have trouble putting together a sufficiently long talk.
I mentioned some of the coolest bugs I've solved (I should probably write a separate article about some of them!). A few people asked for the slides after the talk, so I guess it wasn't totally incoherent. It was really fun to meet some of my favourite NetBSD people. I got to show off my now fairly well-working laptop (it took a lot of work by all of us!). After the conference I came back with a conference cold, and it took a few days to recover from it. Hopefully I didn't infect too many people on the way back. ###GhostBSD tested on real hardware T410 – better than TrueOS? You might have heard about FreeBSD, which is ultimately derived from UNIX back in the day. It is not Linux, even though it is similar in many ways, because Linux was designed to follow UNIX principles. Seeing is believing, so check out the video of the install and some apps as well! Nowadays, if you want some of that BSD on your personal desktop, how do you go about it? Well, there is a full package, or distro, called GhostBSD, which is based on FreeBSD-CURRENT with a Mate or XFCE desktop preconfigured. I did try another package called TrueOS before, and you can check out my blog post about that as well. Let's give it a try on my Lenovo ThinkPad T410. You can download the latest version from ghostbsd.org. Creating a bootable USB drive was surprisingly difficult, as Rufus did not work and created a corrupted drive. You have to follow this procedure under Windows: download the 2.5GB .iso file and rename the extension to .img. Download Win32 Disk Imager, burn the img file to a USB drive and boot from it. You will be able to start a live session and use the onboard setup to install GhostBSD onto a disk. I did encounter some bugs or quirks along the way. The installer failed the first time for some unknown reason but worked on the second attempt. The first boot stopped upon initialization of the USB3 ports (the T410 does not have USB3), but I could use some 'exit' command line magic to continue. The second boot worked fine. Audio was only available through headphones, not speakers, but that could be partially fixed using the command line again. Lots of installed apps did not show up in the start menu, and on goes the quirks list. Overall it is still better than TrueOS for me, because the drivers worked very well and I could address most of the existing bugs. On the upside: free and open-source FreeBSD package ready to go; Mate or XFCE desktop (Mate is the only option for daily builds); drivers work fine, including LAN, WiFi, 2D & 3D video, audio, etc.; UFS or ZFS advanced file systems available. Some downsides: less driver and direct app support than Linux; the installer and desktop have some quirks and bugs; the app store is cumbersome, inferior to TrueOS. ##Beastie Bits EuroBSDCon 2018 and NetBSD sanitizers New mandoc feature: -T html -O toc EuroBSDcon 2018 Polish BSD User Group garbage[43]: What year is it? The Demo @ 50 Microsoft ports DTrace from FreeBSD to Windows 10 OpenBSD joins Twitter NetBSD curses ripoffline improvements FCP-0101: Deprecating most 10/100 Ethernet drivers Announcing the pkgsrc-2018Q3 release Debian on OpenBSD vmd (without qemu or another debian system) A BSD authentication module for duress passwords (Joshua Stein) Disk Price/Performance Analysis ##Feedback/Questions DJ - Zombie ZFS Josua - arm tier 1? how to approach it Gamah - 5ghz Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
Designing a web API (or Application Programming Interface) that lives on a webserver can be very difficult. There's a lot to consider when building an API. While we frequently try to simplify the process down to where it feels the same as making a simple library for our own use, this approach really doesn't get us where we need to be. Read more › The post API Best Practices appeared first on Complete Developer Podcast.
Many people have a vague or incorrect idea of what the fairly common term "API" means. Heads up: it's not a type of beer! Petr lays out the basic details of an application programming interface in plain English so you'll never be confused again. Written by Petr Gazarov: https://twitter.com/PetrGazarov Read by Abbey Rennemeyer: https://twitter.com/abbeyrenn Original article: https://fcc.im/2FHPHer Learn to code for free at: https://www.freecodecamp.org Intro music by Vangough: https://fcc.im/2APOG02 Transcript: Before I learned software development, API sounded like a kind of beer. Today I use the term so often that I have in fact recently tried to order an API at a bar. The bartender’s response was to throw a 404: resource not found. I meet lots of people, both working in tech and elsewhere, who have a rather vague or incorrect idea about what this fairly common term means. Technically, API stands for Application Programming Interface. At some point or another, most large companies have built APIs for their customers, or for internal use. But how do you explain API in plain English? And is there a broader meaning than the one used in development and business? First, let’s pull back and look at how the web itself works. WWW and remote servers When I think about the Web, I imagine a large network of connected servers. Every page on the internet is stored somewhere on a remote server. A remote server is not so mystical after all — it’s just a part of a remotely located computer that is optimized to process requests. To put things in perspective, you can spin up a server on your laptop capable of serving an entire website to the Web (in fact, a local server is what engineers use to develop websites before releasing them to the public). When you type www.facebook.com into your browser, a request goes out to Facebook’s remote server. Once your browser receives the response, it interprets the code and displays the page. To the browser, also known as the client, Facebook’s server is an API. This means that every time you visit a page on the Web, you interact with some remote server’s API. An API isn’t the same as the remote server — rather it is the part of the server that receives requests and sends responses. APIs as a way to serve your customers You’ve probably heard of companies packaging APIs as products. For example, Weather Underground sells access to its weather data API. Example scenario: Your small business’s website has a form used to sign clients up for appointments. You want to give your clients the ability to automatically create a Google calendar event with the details for that appointment. API use: The idea is to have your website’s server talk directly to Google’s server with a request to create an event with the given details. Your server would then receive Google’s response, process it, and send back relevant information to the browser, such as a confirmation message to the user. Alternatively, your browser can often send an API request directly to Google’s server bypassing your server. How is this Google Calendar’s API different from the API of any other remote server out there? In technical terms, the difference is the format of the request and the response. To render the whole web page, your browser expects a response in HTML, which contains presentational code, while Google Calendar’s API call would just return the data — likely in a format like JSON. 
If your website's server is making the API request, then your website's server is the client (similar to your browser being the client when you use it to navigate to a website). From your users' perspective, APIs allow them to complete the action without leaving your website. Most modern websites consume at least some third-party APIs. Many problems already have a third-party solution, be it in the form of a library or service. It's often just easier and more reliable to use an existing solution. It's not uncommon for development teams to break up their application into multiple servers that talk to each other via APIs. The servers that perform helper functions for the main application server are commonly referred to as microservices. To summarize, when a company offers an API to their customers, it just means that they've built a set of dedicated URLs that return pure data responses - meaning the responses won't contain the kind of presentational overhead that you would expect in a graphical user interface like a website. Can you make these requests with your browser? Often, yes. Since the actual HTTP transmission happens in text, your browser will always do the best it can to display the response. For example, you can access GitHub's API directly with your browser without even needing an access token. Here's the JSON response you get when you visit a GitHub user's API route in your browser (https://api.github.com/users/petrgazarov); a short code version of the same request appears after this entry. The browser seems to have done just fine displaying a JSON response. A JSON response like this is ready for use in your code. It's easy to extract data from this text. Then you can do whatever you want with the data. A is for "Application" To close off, let's throw in a couple more examples of APIs. "Application" can refer to many things. Here are some of them in the context of API: A piece of software with a distinct function. The whole server, the whole app, or just a small part of an app. Basically any piece of software that can be distinctly separated from its environment can be an "A" in API, and will probably also have some sort of API. Let's say you're using a third-party library in your code. Once incorporated into your code, a library becomes part of your overall app. Being a distinct piece of software, the library would likely have an API which allows it to interact with the rest of your code. Here's another example: In Object Oriented Design, code is organized into objects. Your application may have hundreds of objects defined that can interact with one another. Each object has an API - a set of public methods and properties that it uses to interact with other objects in your application. An object may also have inner logic that is private, meaning that it's hidden from the outside scope (and not an API). From what we have covered, I hope you take away the broader meaning of API as well as the more common uses of the term today.
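Since the GitHub example above is just a plain HTTPS GET that returns JSON, the same request can be made from code. The sketch below uses Python's requests library (my choice, not the article's) to fetch the user route mentioned above and pick a couple of fields out of the response.

```python
# Fetch the GitHub user route mentioned in the article and read the JSON response.
import requests

resp = requests.get("https://api.github.com/users/petrgazarov", timeout=10)
resp.raise_for_status()
user = resp.json()            # the same JSON the browser displays

# Pick a couple of fields out of the pure-data response.
print(user["login"], "-", user.get("public_repos"), "public repos")
```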
Good morning, everyone! Some of you have emailed me asking about the level of the courses, and I'll say it again: they are basic courses, for people who are starting out, for those who have just stumbled upon this sector and need the basics explained. In fact, many people should also go back over the basics to refresh concepts that leave me stunned when I ask about them. Things like: is the Ask the buy price or the sell price? Is the bid more expensive than the offer, or the other way round? For those people, it is always useful to have a video explainer to review all those concepts which, as I say, are fundamentals. Right, back to today's topic: today I bring you a detailed explanation of APIs in the financial world. For some years now, quite a few in fact, computing has landed forcefully in the financial sector, as in other sectors where it has left its mark. And this sector, like others, has grown in volume, in participants and in access platforms thanks to technology. Thanks to this, everyone can now see the price of a share, a future, an index or anything else in near real time. Computing, which some barely manage, others master and the rest of us get by with, serves countless purposes, and one of them is obtaining and processing data. One of the things I like most about this technological world is that access to information is ever broader, easier and, above all, more diverse. There are now ways to get anything from the weather in Kuala Lumpur to a purchase on Amazon, and it is all done thanks to a technology for communication between different platforms or services. That platform, or protocol, is called an API. To be purist about it, an API, as Wikipedia says, stands for Application Programming Interface and is nothing more than a set of subroutines, functions and procedures offered by a library to be used by other software as an abstraction layer. For me, it is like explaining to a child how to use language. That is, an API to me is simply a protocol we use to communicate, to obtain information from someone, although in this case that someone is a computer, a server or a machine thousands of kilometres away. Or just metres away. Having the internet on our phone, tablet, computer or TV gives us immediate access, and that access not only lets us reach the source of information but also lets us process it; in times like these, as we have seen with companies that hold a lot of information, data is power. So, as I say, today I am here to talk about APIs, more specifically financial APIs. Nowadays a lot of people love to throw around the word FINTECH and everything around it, but I think few really know what this combination of finance and technology is explicitly used for. So I am going to give you some APIs that I think are very valuable for obtaining real-time and important data and that, in some cases, let us get to trades earlier, understand what is happening or even place the trade itself. Let's start with the classic trading APIs: - Oanda: well, I think it is the easiest to use and one of the most complete from a programming point of view.
They let you do almost everything: view your account history, trade through the API and obtain real-time, tick-by-tick data for all the assets the broker offers, which in this case are mostly Forex. It should be said that using the API obviously requires a registered user, but a demo account will do, which lets you play with the thousands of historical data points they hold for multiple pairs; at the same time you get spectacular ease of use thanks to the small examples, in different languages, shown on their website. Honestly, it is always a pleasure to work with this broker. It makes your life easier and lets you do a lot for free. A 10 out of 10 for them. - Interactive Brokers is, for me, the broker par excellence for stocks and futures. For me there is no broker as important as this one. In fact, it is so important to me that using its API, if you are semi-professional or professional, is almost obligatory. With thousands of examples on the internet, you can also get a demo or live account (remember, from €10,000 upwards), and it gives you real-time access to data on their servers just as Oanda does, with the big difference that the number of usable assets, the number of financial products available through this API, is incomparably larger. We are talking about hundreds of assets of every class and, in addition, direct, complete and very robust access that thousands of people around the world use with their own algorithms. Without a doubt, if you are a fan of this kind of microservice, use it. It is worth it. Obtaining data at scale for many companies, indices or somewhat more international securities: - Yahoo! and Google Finance: I group them together because I do not see much difference. The truth is that these are services which, even though I am a big fan of Google and its products, I have to mention here and, at the same time, criticise. I know from a very good source that Google has a division dedicated to finance, with financial algorithms running, cross-referencing data and making money from it, of course. Google Finance must make them a lot of money, or very little, but I hope they do it all through their API, just as Yahoo! does. And I say this because both of these big companies have a finance web service that is pitiful. In fact, that is an understatement. It pains me to say this and criticise them this way, but I do not understand how companies this big, whose business is the internet, can have a web offering as poor, as dated and as useless as their finance pages. That is why I am warning you: if you use this service from either of these two companies, use the API. It works spectacularly well. Fast, effective and very versatile. It lets you do everything with all kinds of companies from around the world and, yes, for free if the data is end-of-day. If you want it updated to the second, you have to pay. The upside is that they have been able to integrate these services with the famous Google Drive and Spreadsheets, or Excel online. This lets us connect to and work with the data without downloading anything to our computer. Everything remote. It is wonderful and lets you run a well-studied swing-trading strategy on one or several portfolios with nothing more than a spreadsheet connected to Google Finance data. Try it. It is worth it for stocks and indices.
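As a taste of what pulling end-of-day data through an API like these looks like in practice, here is a deliberately generic Python sketch against a hypothetical quotes endpoint; neither the URL nor the field names correspond to Oanda's, Interactive Brokers', Yahoo's or Google's actual APIs.

```python
# Generic sketch: download end-of-day quotes from a hypothetical market-data API
# and compute a simple moving average. URL and fields are illustrative only.
import requests

API = "https://marketdata.example.com/v1/eod"   # hypothetical endpoint

def closes(symbol: str, days: int = 30) -> list:
    resp = requests.get(API, params={"symbol": symbol, "days": days}, timeout=10)
    resp.raise_for_status()
    return [bar["close"] for bar in resp.json()["bars"]]

prices = closes("SPY")
sma = sum(prices[-20:]) / 20                    # 20-day simple moving average
print(f"Last close {prices[-1]:.2f}, 20-day SMA {sma:.2f}")
```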
Alternative APIs that some banks use - Saxo Bank uses a Twitter data feed to get information first-hand, as if it came from Bloomberg, CNN or the BBC; a well-configured Twitter feed can bring a lot of information that helps us understand what is happening in the world instantly. But, as with everything, you have to know whom to follow, configure it very well and, above all, know how to tell true information from false, because as you know there is a lot of information of dubious reputation on Twitter, and, as has happened before, it also depends on whether an account has been hacked and starts publishing things, as has occurred on certain occasions. I am talking about the incident a few months back in which the account of a very, very well-known American business television channel was hacked and two tweets were posted claiming that the presidential plane of the president of the United States, I think Obama was in office back then, had crashed. The majestic Air Force One, crashed. Some of you will ask: what does this have to do with the economy? Well, imagine how many people follow this kind of data feed: the S&P 500 benchmark index moved, spreading a bit of fear among some people (and I can even deduce that there are machines behind it reading this kind of data) and pushing the price around on the fear of a possible attack on the president. APIs from the banks themselves - Banc Sabadell: they have a very simple API that lets you connect to your own bank account using credentials, which opens up the possibility of building mobile apps, for example to monetise an idea you have, or simply connecting, say, an Excel sheet to your bank to see how much you have spent during the month and on what. - BBVA: well, this is the most complete API I have seen from a bank in Spain. In fact, they have several: eight different APIs, split between APIs for individuals, for companies and for aggregated data in general. Obviously the data they expose is anonymised, and they take great care over what they show us. We cannot see names or account numbers, but we do get very interesting data, such as BBVA PayStats, which gives us aggregated, anonymous statistics on thousands of transactions (obviously those made with BBVA cards and those processed through the point-of-sale terminals that shops have with BBVA). That lets us see and analyse customers' spending habits. We can also view and work with data from our own account. In short, they have built an ecosystem designed to gather as much data as possible so that you can work with it however you like. And that is all for today! I hope you find this APIs topic interesting and that you feel encouraged to start using them. If you have any questions, please write to me via the contact form on the website. Remember to subscribe to the channel, give me a like on iVoox and five stars on iTunes! Thank you very much! See you on Monday! The post 127. Programación en el mundo financiero appeared first on Ferran P..
¡Muy buenos días a todos! Algunos me habéis enviado correos electrónicos preguntándome el nivel de los cursos y yo os lo repito. Son cursos básicos, para la gente que empieza, para aquellos que se acaban de topar con este sector y que hace falta explicarles las bases. De hecho, hay muchas personas que en muchos casos también deberían repasar las bases para poder recordar cosas que después me quedo abrumado cuando se les preguntan. Cosas como: ¿El Ask es el precio de la compra o el de la venta? ¿La demanda es más cara que la oferta o al revés? Para esas personas, siempre va bien tener un apoyo en vídeo y explicativo para repasar todos esos conceptos que como digo, son de base. Bien, pues volviendo al tema de hoy, hoy vengo a traeros una explicación detallada de las APIs en el mundo financiero y es que desde hace unos años…de hecho unos cuantos, la informática desembarcó con fuerza en el sector financiero, así como en otros sectores donde el uso de la informática ha hecho mella. Y es que este sector, como en otros, podemos comprobar que se ha incrementado en volumen, en participantes y en diferentes plataformas de acceso gracias a la tecnología. De hecho, gracias a esto, ahora todo el mundo puede acceder a ver el precio de una acción, un futuro, un índice o lo que sea casi a tiempo real. La informática que muchos no son arduos, otros son expertos y otros nos defendemos, sirve para infinidad de cosas y una de ellas es la obtención y trato de datos. Una de las cosas que más me gusta de este mundo tecnológico es que el acceso a información es cada vez más amplio, fácil y sobre todo diverso. Ahora existen maneras para obtener desde el tiempo en Kuala Lumpur o comprar en Amazon. Y todo se hace gracias a una tecnología de comunicación entre diferentes plataformas o servicios. Esta plataforma o protocolo es llamado API. Bien, de hecho, y para ser puristas, una API, tal y como dice la Wikipedia pertenece a las siglas de Application Programming Interface y no deja de ser un conjunto de subrutinas, funciones y procedimientos que ofrece cierta biblioteca para ser utilizado por otro software como una capa de abstracción. Para mí, es como explicar a un niño como usar la lengua. Es decir, una API para mí no deja de ser un protocolo que usamos para comunicarnos, para obtener información de alguien, aunque en este caso ese alguien es un ordenador, un servidor o una máquina a miles de kilómetros. O simplemente a metros. El hecho de tener internet en el móvil, Tablet, ordenador o en la tele, nos permite un acceso inmediato y es que este uso no solo nos permite llegar a la fuente de información, sino que también nos permite procesar información y eso, en los tiempos que corren y como hemos visto con empresas que tienen mucha información, la información, los datos, son poder. Y es que como digo, hoy vengo a hablaros de las APIs. Más concretamente las APIs financieras. Hoy en día mucha gente se llena la boca de la palabra FINTECH y todo lo que rodea. Pero poca gente creo que sabe realmente el uso explícito de esta combinación de finanzas y tecnología. Bien, pues voy a daros algunas APIs que creo que tienen un valor muy alto en cuanto a obtención de datos a tiempo real y a datos importantes y que, en algunos casos, el uso de estas APIs nos permite llegar antes a los trades, entender qué está pasando o incluso poder hacer el propio trade. Vamos a empezar con las clásicas APIs de trading: – Oanda: Bueno, creo que es la más fácil de usar y una de las más completas a nivel de programación. 
Ellos te permiten hacer casi de todo: ver tu historial de la cuenta, te permiten hace trading a través de ella y además te permiten obtener datos en tiempo real, tick a tick de todos los activos que tienen en el bróker. Que, en este caso, la mayoría son de Forex. Cabe decir que el uso de la API obviamente se necesita un usuario registrado pero que puede ser una demo, permitiendo jugar con miles de datos que tienen a nivel histórico de múltiples pares y a la vez, te permite conseguir una manejabilidad espectacular gracias al uso de sus pequeños ejemplos que muestran, en diferentes lenguajes en su página web. La verdad, es que es un lujo siempre poder trabajar con este bróker. Te facilita la vida y a la vez, te permite hacer muchas cosas de forma gratuita. Un 10 por ellos. – Interactive Brokers es, para mí, el bróker por excelencia en cuanto a acciones y futuros. Para mí no hay bróker a nombrar tan importante como este. De hecho, para mi es tan importante que el uso de su API, si eres semiprofesional o profesional, es casi obligado. Con miles de ejemplos en internet, también podemos hacernos con una cuenta demo o real (recordad que a partir de 10.000€) y que nos ayuda a acceder a tiempo real a datos de sus servidores como lo hace Oanda, aunque con la amplia diferencia que el número de activos que se pueden usar, el número de productos financieros posibles con esta API es altamente incomparable. Hablo de centenares de activos de todas las clases y, además, del uso de un acceso directo, completo y muy robusto que miles de personas alrededor del mundo usan con sus propios algoritmos. Sin duda, si eres amante de este tipo de microservicios, usadla. Vale la pena. Obtención de datos a gran escala de muchas empresas, índices o valores un poco más internacionales: – Yahoo! y Google finance: lo agrupo porque no veo mucha diferencia. La verdad es que son servicios que, aunque sea un gran amante de Google y de sus productos, aquí tengo que nombrarlo, pero a la vez, criticarlo. Sé de muy buena tinta que Google tiene una división específica para finanzas y que tienen algoritmos financieros corriendo haciendo cruce de datos y ganan dinero de ello, como no. El hecho es que Google Finance les debe dar mucho dinero o muy poco, pero espero que lo hagan todo a través de su API. Igual que lo hace Yahoo!. Y digo esto porque las dos grandes compañías tienen un servicio web de finanzas que da pena. De hecho, aún me quedo corto. Me duele decir esto y criticarlo así, pero no entiendo como entidades tan grandes que se dedican a internet, tengan una web tan pobre, tan antigua y tan poco útil como las aplicadas a finanzas. Es por eso que os aviso. Si usáis este servicio de estas dos compañías, usad la API. Va espectacularmente genial. Rápida, efectiva y con una gran versatilidad de usos. Te permite hacer de todo con todo tipo de empresas de alrededor del mundo y eso si, gratuitas si son datos a final de día. En el caso que quieras actualizados al segundo, tienes que pasar por caja. Pero el hecho de esto es que han podido integrar sus servicios con sus famosos Google Drive y Spredsheet o Excel online. Esto nos permite conectar y tratar estos datos sin descargarnos nada en nuestro ordenador. Todo a distancia. Esto es una maravilla y nos permite hacer un swing trading estudiado de una o varias carteras solo con un Excel conectado a los datos de Google Finance. Probadla. Vale la pena para acciones e índices. 
Alternative APIs that some banks use:

– Saxo Bank uses a Twitter data feed as a first source of news, as if it were Bloomberg, CNN or the BBC. A well-configured Twitter feed can deliver a lot of information that helps us understand what is happening in the world instantly. But, as with everything, you need to know who to follow, configure it carefully and, above all, be able to tell reliable information from false, because as you know Twitter carries a lot of content of dubious credibility, and accounts do get hacked and start publishing things, as has happened on more than one occasion. I am thinking of the incident a few months back in which the account of a very well known American business news channel was hacked and two tweets were posted claiming that the plane of the President of the United States (I believe Obama was in office at the time) had crashed. The majestic Air Force One, crashed. Some of you will ask: what does this have to do with the economy? Well, consider how many people were following that kind of feed (and, I suspect, how many machines were reading it too): the benchmark S&P 500 index moved on the fear of a possible attack on the president.

APIs from the banks themselves:

– Banc Sabadell: they offer a very simple API that lets you connect to your own bank account with a set of credentials. That opens the door to building, say, a mobile app to monetise an idea of yours, or simply to connecting an Excel sheet to your bank to see how much you have spent this month and on what.

– BBVA: this is the most complete bank API I have seen in Spain. It is actually several APIs: eight of them, split between APIs for individuals, for companies and for aggregated data in general. The data they expose is anonymised, and they are very careful about what they show: you will not see names or account numbers, but you will see very interesting data. An example is BBVA PayStats, which provides aggregated, anonymous statistics on thousands of transactions (those made with BBVA cards and through the point-of-sale terminals that shops have with BBVA), so you can analyse customers' consumption habits. You can also access and work with the data of your own account. In short, they have built an ecosystem designed to give you as much data as possible to work with as you please (a rough sketch of what this kind of call could look like follows after this post).

And that is all for today! I hope you find this look at APIs interesting and that it encourages you to start using them. If you have any questions, please write to me through the contact form on the website. Remember to subscribe to the channel and to give me a like on iVoox and five stars on iTunes! Thank you very much! See you on Monday! The post 127. Programación en el mundo financiero appeared first on Ferran P..
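As a companion to the BBVA PayStats description in the post above, here is a rough sketch of what querying an open-banking style API for aggregated card statistics could look like. The host, path, parameters and response fields are all hypothetical placeholders; the bank's real developer portal defines its own authentication flow and schemas.

```python
import requests

# Hypothetical aggregated-payments endpoint, loosely modelled on the idea of
# PayStats-style anonymised card statistics. Everything below is illustrative.
BASE_URL = "https://api.example-bank.com/paystats/v1"

def spending_by_category(zipcode, month, token):
    """Fetch (fictional) anonymised, aggregated card spending for an area."""
    response = requests.get(
        f"{BASE_URL}/zipcodes/{zipcode}/summary",
        params={"month": month},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    summary = response.json()
    # Hypothetical response shape:
    # {"categories": [{"name": "restaurants", "total": 12345.0}, ...]}
    return {c["name"]: c["total"] for c in summary["categories"]}

# Example usage (with placeholder values):
# spending = spending_by_category("08001", "2017-05", "your-oauth-token")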
https://www.patreon.com/makersunplugged
Music Provided by: SineRider -- https://soundcloud.com/sinerider
Makers Unplugged – Episode #3 – Hack the Dragon
Episode Posted: 22 October 2016
Show Notes: 8 December 2016
CHRONOLOGICAL NOTES
Original reason for creating the game [0:45]
Phone IVR [0:58] Interactive Voice Response is the technology that allows you to interact with menus using your phone keys or voice prompts, for example on a customer service line. https://en.wikipedia.org/wiki/Interactive_voice_response
DTMF Tones [2:24] Dual-Tone Multi-Frequency signaling is how touch-tone telephones translate the key you pressed into an electrical signal of two superimposed sine waves that can be decoded using signal processing techniques (see the short sketch after these notes). https://en.wikipedia.org/wiki/Dual-tone_multi-frequency_signaling
Beware of getting Rick Rolled! [3:22] https://www.youtube.com/watch?v=dQw4w9WgXcQ
Original concept of a text-to-speech game [3:50]
Zork [4:11] Zork is one of the earliest interactive computer games: https://en.wikipedia.org/wiki/Zork
MUD games [4:25] Multi-User Dungeon games are real-time multi-player games that are typically text-based, though some graphical MUDs do exist. https://en.wikipedia.org/wiki/MUD
Google Speech API [4:50] An API is an Application Programming Interface: a set of functions that lets an application access the features or data of another application or operating system. https://www.google.com/intl/en/chrome/demos/speech.html
NLP [7:12] Natural Language Processing, a field concerned with human-computer interaction through language. https://en.wikipedia.org/wiki/Natural_language_processing
Next steps after text-to-speech limitations [7:56]
Asterisk PBX and how phone networks work [8:14] Asterisk is a software PBX, or "private branch exchange," the system that allows an organisation's phones to connect to each other and to the outside network. http://www.asterisk.org/ https://en.wikipedia.org/wiki/Business_telephone_system#Private_branch_exchange
2600 Magazine [12:40] https://www.2600.com/
Development of a choose-your-own-adventure game [13:45]
SIP Technology [14:48] Session Initiation Protocol, a signaling and communication protocol often used in VoIP. https://en.wikipedia.org/wiki/Session_Initiation_Protocol
DID [16:55] Direct Inward Dialing is a service that lets a company with its own PBX buy a block of phone numbers. https://en.wikipedia.org/wiki/Direct_inward_dial
Making of the game [19:00]
Eli's favorite room [28:48]
First players and reception [32:05]
Hack.RVA Access Badge Easter Egg [32:30]
Skills USA [35:20] http://www.skillsusa.org/
Bugs, updates, and changes to the game [40:30]
MGCP Firmware [44:05] Media Gateway Control Protocol, another signaling protocol (like SIP) that caused early problems with the game. https://en.wikipedia.org/wiki/Media_Gateway_Control_Protocol
TFTP Server [46:08] Trivial File Transfer Protocol is an extremely basic file transfer system; it is easy to implement but has no security features. https://en.wikipedia.org/wiki/Trivial_File_Transfer_Protocol
VLAN Path [46:28] Virtual Local Area Network. https://en.wikipedia.org/wiki/Virtual_LAN
Windows VM [47:30] A virtual machine is an emulation of a computer system that lets you experiment with code without risking your own machine. https://en.wikipedia.org/wiki/Virtual_machine
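To make the DTMF note above concrete, here is a minimal Python sketch that synthesizes the two-tone signal for a key press. It is an illustration based on the standard DTMF keypad frequency table, not code from the episode.

```python
import numpy as np

# Standard DTMF keypad: each key is the sum of one low-frequency (row)
# sine wave and one high-frequency (column) sine wave.
DTMF_FREQS = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def dtmf_tone(key, duration=0.2, sample_rate=8000):
    """Return the superimposed two-sine-wave signal for a keypad key."""
    low, high = DTMF_FREQS[key]
    t = np.arange(int(duration * sample_rate)) / sample_rate
    return 0.5 * np.sin(2 * np.pi * low * t) + 0.5 * np.sin(2 * np.pi * high * t)

# Example: the signal a phone sends when you press "5" (770 Hz + 1336 Hz).
signal = dtmf_tone("5")
print(signal.shape)  # 1600 samples: 0.2 s at 8 kHz
```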
We live in the 'Information Age', where the creation and use of data is prolific. Data drives business operations, social media and more. Data is important, but my passion is for open data: data that is freely available in files you can download and import into a spreadsheet or database. Another way to reach it is to connect to a special web service called an Application Programming Interface, or API. I wrote the original Tesco Grocery API because it gave me a platform to try out new ideas with customers, as well as to get third-party developers involved in creating great experiences. I discuss the innovation and opportunities of open data with Stuart Coleman. Stuart was commercial director at the Open Data Institute, formed to promote and license open data in the UK. Stuart now invests in companies that take data sources and package them into value-add services that organisations use for their business operations or forecasting. A good example is TransportAPI, which makes available all train and bus timetables in the UK, mixed with real-time data on where trains and buses are right now, as well as road and cycle route planning, among other services. Think of all the live and archived data sources that requires! Given his experience, Stuart is a great person to interview about open data, and what better place to meet him: his office is in a building wedged between London's Euston rail and Tube station and the Euston bus terminus!
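Since the episode's framing of open data boils down to "a file you can download, or an API you can call", here is a minimal pandas sketch of the download-a-file route. The URL is a placeholder for whichever open-data portal you use, not a real dataset from the episode.

```python
import pandas as pd

# Placeholder URL -- substitute a real CSV from your open-data portal of choice.
CSV_URL = "https://data.example.gov/transport/bus_stops.csv"

# pandas reads the CSV straight over HTTP, much like importing it into a spreadsheet.
stops = pd.read_csv(CSV_URL)

print(stops.shape)   # rows x columns in the dataset
print(stops.head())  # first few records for a quick sanity check
```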
The Disruptware Podcast: Online business | Lean startup | Internet Entrepreneur
Growing your business using APIs Video Transcript: Paul: Hi. It's Paul Clifford from Disruptware and I just want to talk about the four ways of increasing your revenue and growing your business using APIs. Now API stands for 'Application Programming Interface' and essentially all it is is a way for two different applications […] The post Growing Your Business Using APIs appeared first on Disruptware.
Application Programming Interface may sound complicated, but this podcast will give you a simple understanding of how to use APIs to save time and money by addressing basic questions like "What is an API?" and "Why use one?"