Podcasts about abstract syntax tree

  • 10 PODCASTS
  • 11 EPISODES
  • 47m AVG DURATION
  • INFREQUENT EPISODES
  • Dec 1, 2022 LATEST

POPULARITY

[Popularity chart covering 2017-2024]


Best podcasts about abstract syntax tree

Latest podcast episodes about abstract syntax tree

Real Talk JavaScript
Episode 212: AG Grid with Stephen Cooper

Real Talk JavaScript

Dec 1, 2022 · 37:35


Recording date: Nov 10, 2022

John Papa @John_Papa
Ward Bell @WardBell
Dan Wahlin @DanWahlin
Craig Shoemaker @craigshoemaker
Stephen Cooper @Scooperdev

Brought to you by AG Grid and IdeaBlade

Resources:
  • AG Grid
  • Tan Stack with Tanner Linsley on Web Rush 206
  • Whimsy definition
  • Getting Started docs for AG Grid
  • Using AG Grid with React
  • Using AG Grid with Vue
  • Using AG Grid with Angular
  • 5 Open Source JavaScript Grids
  • AG Grid GitHub repository
  • What is an Abstract Syntax Tree? (AST)
  • ESLint and TypeScript ASTs
  • Glide Gear Teleprompter
  • Using TypeScript to Auto Generate Documentation

Timejumps:
  • 00:29 Welcome
  • 02:24 Guest introduction
  • 07:10 Helping developers work with AG Grid
  • 08:16 What are the pain points for using grids?
  • 09:49 Sponsor: AG Grid
  • 10:44 How do you determine sensible defaults for developers?
  • 12:49 What's the best route for giving feedback on AG Grid?
  • 14:31 How can users try out different features of AG Grid?
  • 19:17 How do you decide which mode to use?
  • 26:55 Advice for developers using a grid
  • 29:16 Sponsor: IdeaBlade
  • 30:11 Final thoughts

Podcast editing on this episode done by Chris Enns of Lemon Productions.

The Real Python Podcast
Creating a Python Code Completer & More Abstract Syntax Tree Projects

The Real Python Podcast

Sep 2, 2022 · 73:33


How does a code completion tool work? What is an Abstract Syntax Tree, and how is it created in Python? How does an AST help you write programs and projects that inspect and modify your Python code? This week on the show, Meredydd Luff, co-founder of Anvil, shares his PyCon talk, "Building a Python Code Completer."
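The kind of inspect-and-modify work the episode describes can be sketched with Python's standard `ast` module. The following toy example (not from the episode) parses source text, inspects it for function definitions, and rewrites an addition into a subtraction before recompiling:

```python
import ast

source = "def add(a, b):\n    return a + b"

# Parse the source into an Abstract Syntax Tree.
tree = ast.parse(source)

# Inspect: collect every function name defined in the module.
names = [node.name for node in ast.walk(tree) if isinstance(node, ast.FunctionDef)]
print(names)  # ['add']

# Modify: rewrite additions into subtractions, then recompile and run.
class AddToSub(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

new_tree = ast.fix_missing_locations(AddToSub().visit(tree))
namespace = {}
exec(compile(new_tree, "<ast>", "exec"), namespace)
print(namespace["add"](5, 3))  # 2
```

Tools like code completers and refactoring engines build on exactly this loop: parse, walk the tree, transform, re-emit.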

Working Draft » Podcast Feed
Revision 514: ASTs, Linter und Security mit Frederik Braun

Working Draft » Podcast Feed

Feb 1, 2022 · 82:30


With Frederik Braun (Github, Twitter), Firefox security grandmaster and Working Draft regular (known from Revisions 447 and 452), Schepp and Peter examine various aspects of ASTs and security. Show notes [00:01:02] Linting and AST: To start, we first clarify what an Abstract Syntax Tree actually is and how to explore one with the AST explorer. At first […]
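The "what is an AST, and how do I explore it" question from the show notes can also be answered at a Python REPL rather than the AST explorer website; `ast.dump` prints the same kind of node tree the explorer visualizes:

```python
import ast

# A textual view of the tree for a small expression, similar to what
# the AST explorer website shows interactively.
tree = ast.parse("1 + 2 * x", mode="eval")
print(ast.dump(tree, indent=2))
```

The output makes operator precedence explicit: the `Mult` node is nested inside the right operand of the `Add` node, which is exactly the structure linters and security tools walk.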

Python Bytes
#265 Get asizeof pympler and muppy

Python Bytes

Jan 5, 2022 · 47:46


Watch the live stream: Watch on YouTube

About the show: Sponsored by us. Check out the courses over at Talk Python, and Brian's book too! Special guest: Matt Kramer (@__matt_kramer__)

Michael #1: Survey results
  • Question 1 and Question 2 (charts in the original notes).
  • In terms of "too long", the "extras" section has started at these times in the last 4 episodes: 39m, 32m, 35m, and 33m, roughly 34m on average.

Brian #2: Modern attrs API
  • The attrs overview now focuses on using @define.
  • History of attrs article: import attrs, by Hynek.
  • The predecessor was called characteristic; a discussion between Glyph and Hynek in 2015 shaped where to take the idea.
  • attrs popularity took off in 2016 after a post by Glyph: The One Python Library Everyone Needs.
  • In 2017 people started wanting something like attrs in the standard library, thus PEP 557 and dataclasses. Hynek, Eric Smith, and Guido discussed it at PyCon US 2017.
  • dataclasses, with a subset of attrs functionality, was introduced in Python 3.7.
  • Types took off; attrs started supporting type hints as well, even before Python 3.7.
  • Post 3.7, some people started wondering if they still need attrs, since they have dataclasses.
  • @define, field(), and other API improvements came with attrs 20.1.0 in 2020.
  • attrs 21.3.0 released in December, with what Hynek calls "Modern attrs".
  • OG attrs:

        import attr

        @attr.s
        class Point:
            x = attr.ib()
            y = attr.ib()

  • Modern attrs:

        from attr import define

        @define
        class Point:
            x: int
            y: int

  • Many reasons to use attrs are listed in "Why not…", which is an excellent read. Why not dataclasses?
    • Less powerful than attrs, intentionally: attrs has validators, converters, equality customization, …
    • attrs doesn't force type annotations if you don't like them.
    • Slots are on by default; dataclasses only support slots in Python 3.10 and they are off by default.
    • attrs can and will move faster.
  • See also the comparisons with pydantic, named tuples, tuples, dicts, and hand-written classes.

Matt #3: Crafting Interpreters
  • Wanting to learn more about how Python works "under the hood", I first read Anthony Shaw's CPython Internals book, a fantastic, detailed overview of how CPython is implemented.
  • Since I don't have a formal CS background, I found myself wanting to learn a bit more about the fundamentals: parsing, tokenization, bytecode, data structures, etc.
  • Crafting Interpreters is an incredible book by Bob Nystrom (on the Dart team at Google).
  • Although not Python, you walk through the implementation of a dynamic, interpreted language from scratch.
  • You implement the same language (called Lox) in two interpreters: first a direct evaluation of the Abstract Syntax Tree, written in Java; second a bytecode interpreter, written from the ground up in C, including a compiler.
  • Every line of code is in the book; it is incredibly well written and beautifully rendered.
  • I highly recommend it to anyone wanting to learn more about language design and implementation.

Michael #4: Yamale, a schema and validator for YAML
  • via Andrew Simon
  • A basic schema:

        name: str()
        age: int(max=200)
        height: num()
        awesome: bool()

  • And some YAML that validates:

        name: Bill
        age: 26
        height: 6.2
        awesome: True

  • Take a look at the Examples section for more complex schema ideas.
  • ⚠️ Ensure that your schema definitions come from internal or trusted sources. Yamale does not protect against intentionally malicious schemas.

Brian #5: pympler
  • Inspired by something Bob Belderbos wrote about sizes of objects, I think.
  • "Pympler is a development tool to measure, monitor and analyze the memory behavior of Python objects in a running Python application. By pympling a Python application, detailed insight in the size and the lifetime of Python objects can be obtained. Undesirable or unexpected runtime behavior like memory bloat and other 'pymples' can easily be identified."
  • Three separate modules for profiling:
    • The asizeof module provides basic size information for one or several Python objects.
    • muppy is used for on-line monitoring of a Python application.
    • Class Tracker provides off-line analysis of the lifetime of selected Python objects.
  • asizeof is what I looked at recently. In contrast to sys.getsizeof, asizeof sizes objects recursively. You can use one of the asizeof functions to get the size of these objects and all associated referents:

        >>> from pympler import asizeof
        >>> obj = [1, 2, (3, 4), 'text']
        >>> asizeof.asizeof(obj)
        176
        >>> print(asizeof.asized(obj, detail=1).format())
        [1, 2, (3, 4), 'text'] size=176 flat=48
            (3, 4) size=64 flat=32
            'text' size=32 flat=32
            1 size=16 flat=16
            2 size=16 flat=16

  • "Function flatsize returns the flat size of a Python object in bytes, defined as the basic size plus the item size times the length of the given object."

Matt #6: hvPlot .interactive
  • hvPlot is a high-level plotting API that is part of the PyData ecosystem, built on HoloViews.
  • My colleague Phillip Rudiger recently gave a talk at PyData Global on a new .interactive feature.
  • Here's an announcement in the HoloViz forum.
  • It allows integration of widgets directly into a pandas analysis pipeline (method chain), so you can add interactivity to your notebook for exploratory data analysis, or serve it as a Panel app.
  • Gist and video by Marc Skov Madsen.

Extras

Michael:
  • Typora app, recommended!
  • Congrats Will
  • Got a chance to solve a race condition with Tenacity
  • New project management at GitHub

Matt:
  • Check out the new Anaconda Nucleus Community forums!
  • We're hiring, and remote-first. Check out anaconda.com/careers
  • Pre-compiled packages now available for Pyston
  • We have an upcoming webinar from Martin Durant: When Your Big Problem is I/O Bound

Joke:
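The flat-vs-recursive distinction that pympler's asizeof draws can be illustrated with only the standard library. This is a rough, illustrative approximation (the real asizeof is far more careful about shared referents, slots, and builtin internals):

```python
import sys

def deep_sizeof(obj, seen=None):
    """Recursively total sys.getsizeof over an object and its referents.

    A toy stand-in for pympler's asizeof: handles only a few container
    types and dedupes shared objects by id.
    """
    seen = set() if seen is None else seen
    if id(obj) in seen:
        return 0
    seen.add(id(obj))
    size = sys.getsizeof(obj)
    if isinstance(obj, dict):
        size += sum(deep_sizeof(k, seen) + deep_sizeof(v, seen)
                    for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set, frozenset)):
        size += sum(deep_sizeof(item, seen) for item in obj)
    return size

obj = [1, 2, (3, 4), 'text']
# The flat size counts only the list object itself...
print(sys.getsizeof(obj))
# ...while the recursive size also includes the ints, tuple, and string.
print(deep_sizeof(obj))
```

Exact byte counts vary by Python version and platform, which is one reason to reach for pympler instead of rolling your own in production.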

The Swyx Mixtape
[Weekend Drop] Temporal — the iPhone of System Design

The Swyx Mixtape

Jul 25, 2021 · 23:35


This is the audio version of the essay I published on Monday. I'm excited to finally share why I've joined Temporal.io as Head of Developer Experience. It's taken me months to precisely pin down why I have been obsessed with Workflows in general and Temporal in particular. It boils down to 3 core opinions: Orchestration, Event Sourcing, and Workflows-as-Code.

Target audience: product-focused developers who have some understanding of system design, but limited distributed systems experience and no familiarity with workflow engines.

30 Second Pitch

The most valuable, mission-critical workloads in any software company are long-running and tie together multiple services. Because this work relies on unreliable networks and systems, you want to standardize timeouts and retries, and you want to offer "reliability on rails" to every team. Because this work is so important, you must never drop any work, and you must log all progress. Because this work is complex, you want to easily model dynamic asynchronous logic, and reuse, test, version, and migrate it. Finally, you want all this to scale: the same programming model going from small use cases to millions of users without re-platforming. Temporal is the best way to do all this, by writing idiomatic code known as "workflows".

Requirement 1: Orchestration

Suppose you are executing some business logic that calls System A, then System B, and then System C. Easy enough, right? But: System B has rate limiting, so sometimes it fails right away and you're just expected to try again some time later. System C goes down a lot, and when it does, it doesn't actively report a failure; your program is perfectly happy to wait an infinite amount of time and never retry C. You could deal with B by just looping until you get a successful response, but that ties up compute resources. Probably the better way is to persist the incomplete task in a database and set a cron job to periodically retry the call. Dealing with C is similar, but with a twist.
You still need B's code to retry the API call, but you also need another (shorter-lived, independent) scheduler to place a reasonable timeout on C's execution time, since it doesn't report failures when it goes down. Do this often enough and you soon realize that timeouts and retries are standard production-grade requirements when crossing any system boundary, whether you are calling an external API or just a different service owned by your own team.

Instead of writing custom code for timeouts and retries for every single service every time, is there a better way? Sure, we could centralize it! We have just rediscovered the need for orchestration over choreography. There are various names for the combined A-B-C system orchestration we are doing; depending who you ask, this is called a Job Runner, Pipeline, or Workflow.

Honestly, what interests me (more than the deduplication of code) is the deduplication of infrastructure. The maintainer of each system no longer has to provision the additional infrastructure needed for this stateful, potentially long-running work. This drastically simplifies maintenance (you can shrink your systems down to as small as a single serverless function) and makes it easier to spin up new ones, with the retry and timeout standards you now expect from every production-grade service. Workflow orchestrators are "reliability on rails". But there's a risk, of course: you've just added a centralized dependency to every part of your distributed system. What if it ALSO goes down?

Requirement 2: Event Sourcing

The work that your code does is mission critical. What does that really mean? We cannot drop anything. All requests to start work must either result in error or success; no "it was supposed to be running but got lost somewhere" mismatch in expectations. During execution, we must be able to resume from any downtime. If any part of the system goes down, we must be able to pick up where we left off.
We need the entire history of what happened when, for legal compliance, in case something went wrong, or if we want to analyze metadata across runs.

There are two ways to track all this state. The usual way starts with a simple task queue, and then adds logging:

    (async function workLoop() {
      const nextTask = taskQueue.pop()
      await logEvent('starting task:', nextTask.ID)
      try {
        await doWork(nextTask) // this could fail!
      } catch (err) {
        await logEvent('reverting task:', nextTask.ID, err)
        taskQueue.push(nextTask)
      }
      await logEvent('completed task:', nextTask.ID)
      setTimeout(workLoop, 0)
    })()

But logs-as-afterthought has a bunch of problems. The logging is not tightly paired with the queue updates. If it is possible for one to succeed but the other to fail, you either have unreliable logs or dropped work, which is unacceptable for mission-critical work. This could also happen if the central work loop itself goes down while tasks are executing. At the local level, you can fix this with batch transactions. Between systems, you can create two-phase commits. But this is a messy business that further bloats your business code with a ton of boilerplate, IF (a big if) you have the discipline to instrument every single state change in your code.

The alternative to logs-as-afterthought is logs-as-truth: if it wasn't logged, it didn't happen. This is also known as Event Sourcing. We can always reconstruct current state from an ever-growing list of eventHistory:

    (function workLoop() {
      const nextTask = reconcile(eventHistory, workStateMachine)
      doWorkAndLogHistory(nextTask, eventHistory) // transactional
      setTimeout(workLoop, 0)
    })()

The next task is strictly determined by comparing the event history to a state machine (provided by the application developer). Work is either done and committed to history, or not at all. I've handwaved away a lot of heavy lifting done by reconcile and doWorkAndLogHistory.
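As a concrete toy version of that reconcile/log loop (an illustrative Python sketch of the logs-as-truth idea, not Temporal's actual API), the current state is never stored directly; the next step is always derived by replaying the event history:

```python
def reconcile(event_history, steps):
    """Return the next step to run, derived purely from the event history."""
    done = {e["step"] for e in event_history if e["type"] == "completed"}
    for step in steps:
        if step not in done:
            return step
    return None  # workflow finished

def do_work_and_log_history(step, event_history):
    # In a real system this append would be transactional with the work itself.
    event_history.append({"type": "completed", "step": step})

steps = ["call_A", "call_B", "call_C"]
history = []

# Simulate a crash after the first step: a restarted loop replays the
# history, sees call_A is already done, and resumes at call_B; nothing
# is dropped and nothing is redone.
do_work_and_log_history(reconcile(history, steps), history)  # runs call_A
assert reconcile(history, steps) == "call_B"
```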
But this solves a lot of problems:

  • Our logs are always reliable, since that is the only way we determine what to do next.
  • We use transactional guarantees to ensure that work is either done and tracked, or not at all. There is no "limbo" state; at worst, we'd rather retry already-done work with idempotency keys than drop work.
  • Since there is no implicit state in the work loop, it can be restarted easily on any downtime (or scaled horizontally for high load).
  • Finally, with standardized logs in our event history, we can share observability and debugging tooling between users.

You can also make an analogy to the difference between "filename version control" and git: using event histories as your source of truth is comparable to a git repo that reflects all git commits to date. But there's one last problem to deal with: how exactly should the developer specify the full state machine?

Requirement 3: Workflows-as-Code

The prototypical workflow state machine is a JSON or YAML file listing a sequence of steps. But this abuses configuration formats for expressing code.
It doesn't take long before you start adding features like conditional branching, loops, and variables, until you have an underspecified, Turing-complete "domain specific language" hiding out in your JSON/YAML schema:

    [
      {
        "first_step": {
          "call": "http.get",
          "args": { "url": "https://www.example.com/callA" },
          "result": "first_result"
        }
      },
      {
        "where_to_jump": {
          "switch": [
            { "condition": "${first_result.body.SomeField < 10}", "next": "small" },
            { "condition": "${first_result.body.SomeField < 100}", "next": "medium" }
          ],
          "next": "large"
        }
      },
      { "small": { "call": "http.get", "args": { "url": "https://www.example.com/SmallFunc" }, "next": "end" } },
      { "medium": { "call": "http.get", "args": { "url": "https://www.example.com/MediumFunc" }, "next": "end" } },
      { "large": { "call": "http.get", "args": { "url": "https://www.example.com/LargeFunc" }, "next": "end" } }
    ]

This example happens to be from Google, but you can compare similar config-driven syntaxes from Argo, Amazon, and Airflow. The bottom line is you ultimately find yourself hand-writing the Abstract Syntax Tree of something you can read much better in code anyway:

    async function dataPipeline() {
      const { body: SomeField } = await httpGet("https://www.example.com/callA")
      if (SomeField < 10) {
        await httpGet("https://www.example.com/SmallFunc")
      } else if (SomeField < 100) {
        await httpGet("https://www.example.com/MediumFunc")
      } else {
        await httpGet("https://www.example.com/BigFunc")
      }
    }

The benefit of using general-purpose programming languages to define workflows (Workflows-as-Code) is that you get the full set of tooling that is already available to you as a developer: from IDE autocomplete to linting to syntax highlighting to version control to ecosystem libraries and test frameworks. But perhaps the biggest benefit of all is the reduced need for context switching from your application language to the workflow language.
(So much so that you could copy over code and get reliability guarantees with only minor modifications.) This config-vs-code debate arises in multiple domains: you may have encountered it in AWS provisioning (CloudFormation vs CDK/Pulumi) or CI/CD (debugging giant YAML files for your builds). Since you can always write code to interpret any declarative JSON/YAML DSL, the code layer offers a superset of capabilities.

The Challenge of DIY Solutions

So for our mission-critical, long-running work, we've identified three requirements: we want an orchestration engine between services, we want to use event sourcing to track and resume system state, and we want to write all this with code rather than config languages. Respectively, these solve the pain points of reliability boilerplate, implementing observability/recovery, and modeling arbitrary business logic.

If you were to build this on your own: You can find an orchestration engine off the shelf, though few have a strong open source backing. You'd likely start with a logs-as-afterthought system, accumulating inconsistencies over time until they are critical enough to warrant a rewrite to a homegrown event sourcing framework with stronger guarantees. As you generalize your system for more use cases, you might start off using a JSON/YAML config language, because that is easy to parse. If it were entrenched and large enough, you might create an "as Code" layer just as AWS did with AWS CDK, causing an impedance mismatch until you rip out the underlying declarative layer.
Finally, you'd have to make your system scale for many users (horizontal scaling + load balancing + queueing + routing) and many developers (workload isolation + authentication + authorization + testing + code reuse).

Temporal as the "iPhone solution"

When Steve Jobs introduced the iPhone in 2007, he introduced it as "a widescreen iPod with touch controls, a revolutionary mobile phone, and a breakthrough internet communications device", before stunning the audience: "These are not three separate devices. This is ONE device." This is the potential of Temporal. Temporal has opinions on how to make each piece best-in-class, but the tight integration creates a programming paradigm that is ultimately greater than the sum of its parts:

  • You can build a UI that natively understands workflows as potentially infinitely long-running business logic, exposing retry status, event history, and code inputs/outputs.
  • You can build workflow migration tooling that verifies that old-but-still-running workflows have been fully accounted for when migrating to new code.
  • You can add pluggable persistence so that you are agnostic to what databases or even what cloud you use.
  • You can run polyglot teams; each team can work in their ideal language, and only care about serializable inputs/outputs when calling each other, since event history is language-agnostic.

There are more possibilities I can't talk about yet.

The Business Case for Temporal

A fun anecdote about how I got the job: through blogging. While exploring the serverless ecosystem at Netlify and AWS, I always had the nagging feeling that it was incomplete and that the most valuable work was always "left as an exercise to the reader". The feeling crystallized when I rewatched DHH's 2005 Ruby on Rails demo and realized that there was no way the serverless ecosystem could match up to it.
We broke up the monolith to scale it, but there were just too many pieces missing. I started analyzing cloud computing from a "Jobs to Be Done" framework and wrote two throwaway blogposts called Cloud Operating Systems and Reconstituting the Monolith. My ignorant posting led to an extended comment from a total internet stranger telling me all the ways I was wrong. Lenny Pruss, who was ALSO reading my blogpost, saw this comment, got Ryland to join Temporal as Head of Product, and Ryland then turned around and pitched (literally pitched) me to join. One blogpost, two jobs. Learn in Public continues to amaze me by the luck it creates.

Still, why would I quit a comfy, well-paying job at Amazon to work harder for less money at a startup like this?

Extraordinary people. At its core, betting on any startup is betting on the people. The two cofounders of Temporal have been working on variants of this problem for over a decade each at AWS, Microsoft, and Uber. They have attracted an extremely high caliber team around them, with centuries of distributed systems experience. I report to the Head of Product, who is one of the fastest scaling executives Sequoia has ever seen.

Extraordinary adoption. Because it reinvents service orchestration, Temporal (and its predecessor Cadence) is very horizontal by nature. Descript uses it for audio transcription, Snap uses it for ads reporting, Hashicorp uses it for infrastructure provisioning, Stripe uses it for the workflow engine behind Stripe Capital and Billing, Coinbase uses it for cryptocurrency transactions, Box uses it for file transfer, Datadog uses it for CI/CD, DoorDash uses it for delivery creation, Checkr uses it for background checks. Within each company, growth is viral; once one team sees successful adoption, dozens more follow suit within a year, all through word of mouth.

Extraordinary results. After migrating, Temporal users report production issues falling from once a week to near zero.
Accidental double-spends have been discovered and fixed, saving millions in cold hard cash. Teams report being able to move faster, thanks to testing, code reuse, and standardized reliability. While the value of this is hard to quantify, it is big enough that users organically tell their friends and list Temporal in their job openings.

Huge potential market growth. The main thing you bet on when it comes to Temporal is that its primary competition really is homegrown workflow systems, not other engines like Airflow, AWS Step Functions, and Camunda BPMN. In other words, even though Temporal should gain market share, the real story is market growth, driven by the growing microservices movement and developer education around best-in-class orchestration. At AWS and Netlify, I always felt like there was a missing capability in building serverless-first apps (duct-taping functions and cronjobs and databases to do async work) and it all fell into place the moment I saw Temporal. I'm betting that there are many, many people like me, and that I can help Temporal reach them.

High potential value capture. Apart from market share and market growth, any open source project has the additional challenge of value capture, since users can self-host at any time. I mostly subscribe to David Ulevitch's take that open source SaaS is basically outsourcing ops. I haven't talked about Temporal's underlying architecture, but it has quite a few moving parts and takes a lot of skill and system understanding to operate. For reasons I won't get into, Temporal scales best on Cassandra, and that alone is enough to make most want to pay someone else to handle it.

Great expansion opportunities. Temporal is by nature the most direct source of truth on the most valuable, mission-critical workflows of any company that adopts it. It can therefore develop the most mission-critical dashboard and control panel.
Any source of truth also becomes a natural aggregation point for integrations, leaving open the possibility of an internal or third-party service marketplace. With the Signals and Queries features, Temporal easily gets data in and out of running workflows, making it an ideal foundation for the sort of human-in-the-loop work needed for the API Economy. Imagine toggling just one line of code to A/B test vendors and APIs, or having Temporal learn while a domain expert manually executes decision processes and take over once it has seen enough. As a "high-code" specialist in reliable workflows, it could be a neutral arms dealer in the "low-code" gold rush, or choose to get into that game itself. If you want to get really wild, the secure distributed execution model of Workflow workers could be facilitated by an ERC-20 token. (To be clear: everything listed here is personal speculation and not the company roadmap.)

There is much work to do, though. Temporal Cloud needs a lot of automation and scaling before it becomes generally available. Temporal's UI is in the process of a full rewrite. Temporal's docs need a lot more work to fully explain such a complex system with many use cases. Temporal still doesn't have a production-ready Node.js or Python SDK.
And much, much more to do before Temporal's developer experience becomes accessible to the majority of developers. If what I've laid out excites you, take a look at our open positions (or write in your own!), and join the mailing list!

Further Reading

Orchestration
  • Yan Cui's guide to Orchestration vs Choreography
  • InfoQ: Coupling Microservices, a non-Temporal-focused discussion of Orchestration
  • A Netflix Guide to Microservices

Event Sourcing
  • Martin Fowler on Event Sourcing
  • Kickstarter's guide to Event Sourcing

Code over Config
  • ACloudGuru's guide to Terraform, CloudFormation, and AWS CDK
  • Serverless Workflow's comparison of Workflow specification formats

Temporal
  • Dealing with failure: when to use Workflows
  • The macro problem with microservices: Temporal in the context of microservices
  • Designing A Workflow Engine from First Principles: Temporal Architecture Principles
  • Writing your first workflow: 20min code video
  • Case studies and External Resources from our users

Javascript to Elm
20: Building a compiler. Part 1

Javascript to Elm

Jan 11, 2018 · 25:08


Something I've wanted to know more about, having missed out on the usual opportunity during a CS degree, is how a compiler works: how we get from source code to running byte code on the other end. The process of writing Elm that compiles to JavaScript using a compiler written in Haskell was the tipping point for me. Compound that with webpack walking the Abstract Syntax Tree, and learning the term 'lexical scope' from Kyle Simpson in his YDKJS series.
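The source-to-bytecode pipeline the episode describes can be poked at directly from Python's standard library; this small sketch (mine, not from the episode) shows the two halves of any compiler front end and back end:

```python
import ast
import dis

source = "x = 1 + 2"

# Front end: parse the source text into an Abstract Syntax Tree.
tree = ast.parse(source)
print(ast.dump(tree))

# Back end: compile the AST into a code object and inspect its bytecode.
code = compile(tree, "<example>", "exec")
dis.dis(code)
```

Elm's compiler does the analogous thing with JavaScript as its target, and webpack walks a JavaScript AST for much the same reason: the tree, not the raw text, is what tools reason about.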

Working Draft » Podcast Feed
Revision 317: Post-PostCSS

Working Draft » Podcast Feed

Nov 20, 2017 · 55:44


Stefan, Hans and Schepp share their experiences with PostCSS, first introduced in Revision 224. Show notes [00:00:17] PostCSS: PostCSS is at once very little and a lot. The framework itself offers no more and no less than a way to turn CSS into an Abstract Syntax Tree, and later turn that back into CSS. Only […]

Take Up Code
207: Trees: AST: Abstract Syntax Tree. A Simple Example.

Take Up Code

Oct 30, 2017 · 16:43


An abstract syntax tree can help your code make sense of what a user provides.
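One common version of "making sense of what a user provides" is evaluating a user-supplied arithmetic expression by walking its tree rather than calling eval. This is a toy illustration of the idea (not code from the episode, which uses its own simple example):

```python
import ast
import operator

# Map AST operator node types to the functions that implement them.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate user-provided arithmetic via its AST, whitelisting node types."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("disallowed syntax: " + type(node).__name__)
    return walk(ast.parse(expr, mode="eval").body)

print(safe_eval("2 + 3 * 4"))  # 14
```

Because evaluation recurses over the tree, operator precedence comes for free from the parser, and anything outside the whitelist (function calls, names, attribute access) is rejected instead of executed.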

Devchat.tv Master Feed
089 AiA Angular CLI with Ciro Nunes

Devchat.tv Master Feed

Apr 21, 2016 · 54:49


  • 02:11 - Ciro Nunes Introduction: Twitter, GitHub, Blog
  • 02:39 - Command-line Interface (CLI)
  • 06:58 - Ciro's Involvement with the CLI
  • 08:10 - Features and Improvements for Angular 2: Ruby on Rails, AST (Abstract Syntax Tree) Transformations, NG6-starter
  • 19:33 - Accessibility
  • 26:36 - CLI Basics
  • 28:11 - Testing
  • 34:12 - Building a Production Pipeline
  • 35:38 - GitHub Pages; Community Contribution: Angular-cli, The GDE Program

Picks:
  • Star Wars: The Force Awakens (John)
  • LEGO® Star Wars: The Force Awakens (John)
  • ng-conf (John)
  • AngleBrackets (John)
  • Disturbed - The Sound Of Silence (Joe)
  • The Hello World Podcast (Joe)
  • Jurgen Van de Moere: How I feel about Angular 2 (Ciro)
  • angular-cli (Ciro)
