Anthropic's Claude 3.5 Sonnet, the newest addition to the state-of-the-art Claude family of AI models, offers better intelligence and speed than Claude 3 Opus at one-fifth of the price. The model, the latest and most capable from artificial intelligence (AI) safety and research company Anthropic, is generally available today in Amazon Bedrock. Amazon Bedrock offers the broadest selection of high-performing foundation models (FMs) from leading AI companies, along with the capabilities and enterprise security customers need to quickly build and deploy generative AI applications.

Data from Anthropic shows that Claude 3.5 Sonnet sets a new industry standard for intelligence. According to Anthropic, the new model outperforms other models available today, including OpenAI's GPT-4o, Google's Gemini 1.5 Pro, and Anthropic's own previously most capable model, Claude 3 Opus, in areas including expert knowledge, coding, and complex reasoning.

What can Claude 3.5 Sonnet do? Claude 3.5 Sonnet is the ideal model for complex tasks such as context-sensitive customer support, orchestrating multi-step workflows, streamlining code translations, and creating revenue-generating user-facing applications. It is particularly adept at creative writing, representing a step change in its ability to understand nuance and humour, and it can produce high-quality written content with a more natural, human-like tone that sounds more authentic and relatable than previous versions of Claude. The multimodal Claude 3.5 Sonnet also excels at processing images with state-of-the-art vision, particularly when interpreting charts and graphs, helping users get faster, deeper insights from data.
It can accurately decipher text from imperfect images, for example poorly scanned documents, and in doing so glean more insights than from text alone. Other Claude 3.5 Sonnet strengths include:

- Advanced coding capabilities: when instructed and provided with the relevant tools, autonomously writing, editing, and running code with sophisticated reasoning and advanced troubleshooting capabilities, offering best-in-class accuracy.
- Improved understanding of context: handling intricate inquiries by understanding user context and orchestrating multi-step workflows. For customer service applications in particular, this enables round-the-clock support, impressively fast response times, natural-sounding interactions, and significantly improved customer satisfaction.
- Enhanced data science and analysis capabilities: augmenting human expertise in data science by navigating unstructured data and leveraging multiple user-provided tools to generate insights. When given access to a coding environment, it produces high-quality statistical visualisations and actionable predictions, ranging from business strategies to real-time product trends.

AWS customers across a range of industries have already been accessing the state-of-the-art Claude models through Amazon Bedrock, enhancing their ability to rapidly test, build, and deploy generative AI applications across their organisations.

Improving customer connections: DoorDash, a technology company that connects consumers with their favorite local businesses in more than 30 countries worldwide, receives hundreds of thousands of requests for assistance through its contact centre each day. The company has incorporated Anthropic's Claude 3 models in Amazon Bedrock with Amazon Connect and Amazon Lex to build a generative AI contact centre solution that streamlines customer support. By deploying Knowledge Bases for Amazon Bedrock as the foundation, DoorDash reduced generative AI application development time by 50%.
The solution is currently fielding hundreds of thousands of support calls each day and has driven material reductions in call volumes, while t...
All right, so I'm here with 52 Weeks of AWS, still continuing to work through the developer certification. I'm going to go ahead and share my screen here.

All right, so we are on Lambda, one of my favorite topics. Let's get right into it and talk about how to develop event-driven solutions with AWS Lambda.

Serverless computing changes the way you think about building software. In a traditional deployment environment, you would configure an instance, update an OS, install applications, build and deploy them, and set up load balancing. That's non-cloud-native computing. With serverless, you really only need to focus on building and deploying applications, and then monitoring and maintaining them. What serverless really does is let you focus on the code for the application: you don't have to manage the operating system or the servers, and you don't have to scale them. That's a huge advantage, because you don't pay for the infrastructure when the code isn't running. That's really the key takeaway.

If you take a look at the AWS serverless platform, there's a bunch of fully managed services that are tightly integrated with Lambda, and this is another huge advantage of Lambda: it isn't necessarily that it's the fastest or that it has the most powerful execution, it's the tight integration with the rest of the AWS platform. Developer tools like the AWS Serverless Application Model (AWS SAM) help you simplify the deployment of serverless applications, and the integrated services include Amazon S3, Amazon SNS, Amazon SQS, and the AWS SDKs.

So in terms of Lambda: AWS Lambda is a compute service for serverless, and it lets you run code without provisioning or managing servers. It allows you to trigger your code in response to events that you configure. For example, dropping an image into an S3 bucket can invoke a Lambda that transcodes it to a different format. It also scales automatically based on demand, and it incorporates built-in monitoring and logging with Amazon CloudWatch.

So if you look at AWS Lambda, one of the things it does is enable you to bring your own code. The code you write for Lambda isn't written in some new language; you can write things in tons of different languages for AWS Lambda: Node.js, Java, Python, C#, Go, Ruby. There are also custom runtimes, so you could do Rust or Swift or something like that. It also integrates very deeply with other AWS services, and you can invoke third-party applications as well.

It also has a very flexible resource and concurrency model. Lambda scales in response to events, so you just configure memory settings and AWS handles the other details, like the CPU, the network, and the I/O throughput. You can also use the AWS Identity and Access Management service, or IAM, to grant access to whatever other resources your function needs. This is one of the ways you control the security of Lambda: you have real guardrails around it, because you give Lambda a role scoped to whatever you need it to do, say talk to SQS or talk to S3, and it will only act within that role.

The other thing about Lambda is that it has built-in availability and fault tolerance. It's a fully managed, highly available service, and you don't have to do anything at all to get that. And one of the biggest things about Lambda is that you only pay for what you use. When the Lambda service is idle, you don't pay for it, versus something else, like even a Kubernetes-based system, where there's still a host machine running Kubernetes that you have to pay for.
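As a sketch of what such an event-driven function looks like in Python: the `lambda_handler(event, context)` signature is the standard one, but the event below is a trimmed-down, made-up S3-style notification, and the bucket and key names are placeholders for illustration.

```python
import json

def lambda_handler(event, context):
    """Entry point: the Lambda runtime calls this with the triggering
    event and a context object it supplies."""
    # Pull the bucket and object key out of an S3-style notification event.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    return {
        "statusCode": 200,
        "body": json.dumps({"bucket": bucket, "key": key}),
    }

# Local smoke test with a trimmed-down fake event (context unused here).
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-images"}, "object": {"key": "cat.png"}}}
    ]
}
print(lambda_handler(sample_event, None))
```

Because the handler is just a function, you can call it locally like this with a hand-built event before wiring up any real trigger.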
So one of the ways you can think about Lambda is that there's a bunch of different use cases for it. Let's start with web apps, which I think are one of the better ones to think about. You can combine AWS Lambda with other services to build powerful web apps that automatically scale up and down, with no administrative effort at all: no backups necessary, no multi-data-center redundancy, it's done for you. Backends: you can build serverless backends that handle web, mobile, IoT, and third-party applications, and you can build those backends with Lambda and API Gateway and build applications on top of them. In terms of data processing, you can also use Lambda to run code in response to a trigger, a change in data, or a shift in system state, and really all of AWS, for the most part, can be orchestrated with Lambda, so it's really a glue-type service. Now chatbots, that's another great use case. Amazon Lex is a service for building conversational chatbots, and you can use it with Lambda. Lambda is also used for voice and IT automation. These are all great use cases for Lambda; in fact, I would say it's kind of the go-to automation tool for AWS.

So let's talk about how Lambda works next. The way Lambda works is that there's a function and there's an event source, and these are the core components. The event source is the entity that publishes events to AWS Lambda, and the Lambda function is the code you're going to use to process the event. AWS Lambda runs that Lambda function on your behalf. A few things to consider: it really is just a little bit of code, and you can configure triggers to invoke a function in response to resource lifecycle events, for example responding to incoming HTTP requests, consuming events from a queue (as in the case of SQS), or running on a schedule. Running on a schedule is actually a really good data engineering task; you could run it periodically to scrape a website.

So as a developer, when you create Lambda functions that are managed by the AWS Lambda service, you define the permissions for the function and specify the events that actually trigger it. You also create a deployment package that includes your application code and any dependencies or libraries necessary to run the code, and you can configure the function's runtime settings as well.
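To make the deployment package idea concrete, here is a minimal sketch using only the standard library: it writes a one-function handler module and zips it with the code at the archive root, which is the layout a zip package uses. The file names are placeholders.

```python
import os
import tempfile
import zipfile

# Write a tiny handler module to a scratch directory.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "handler.py")
with open(src, "w") as f:
    f.write("def lambda_handler(event, context):\n    return {'ok': True}\n")

# Zip it up; arcname keeps handler.py at the root of the archive,
# which is where the runtime expects to find the handler module.
package = os.path.join(workdir, "function.zip")
with zipfile.ZipFile(package, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write(src, arcname="handler.py")

print(zipfile.ZipFile(package).namelist())  # ['handler.py']
```

In practice a build tool or IDE does this for you, but the result is the same: an archive of your code plus dependencies that you upload.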
Those settings include the memory, the timeout, and the concurrency, and when your function is invoked, Lambda provides a runtime environment based on the runtime and configuration options you selected.

So let's talk about models for invoking Lambda functions. An event source invokes a Lambda function by either a push or a pull model. In the push model, the event source directly invokes the Lambda function when the event occurs. In the pull model, the information is put into a stream or a queue, and Lambda polls that stream or queue and invokes the function when it detects an event.

A few different examples. Some services can invoke the function directly: for a synchronous invocation, the other service waits for the response from the function. A good example is Amazon API Gateway, which is the REST-based service in front; when a client makes a request to your API, that client gets a response immediately. With this model, there's no built-in retry in Lambda. Examples of this would be Elastic Load Balancing, Amazon Cognito, Amazon Lex, Amazon Alexa, Amazon API Gateway, AWS CloudFormation, Amazon CloudFront, and also Amazon Kinesis Data Firehose.

For asynchronous invocation, AWS Lambda queues the event before it passes it to your function. The other service gets a success response as soon as the event is queued, and if an error occurs, Lambda automatically retries the invocation twice. Good examples of this would be S3, SNS, SES (the Simple Email Service), AWS CloudFormation, Amazon CloudWatch Logs, CloudWatch Events, AWS CodeCommit, and AWS Config.

In both cases, you can invoke a Lambda function using the Invoke operation, and you can specify the invocation type as either synchronous or asynchronous. When you use an AWS service as a trigger, the invocation type is predetermined for each service, and you have no control over the invocation type those event sources use when they invoke your Lambda function.

In the polling model, the event sources put information into a stream or a queue, and AWS Lambda polls the stream or the queue. When it finds a record, it delivers the payload and invokes the function. In this model, Lambda itself is pulling data from a stream or a queue for processing by the Lambda function.
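The pull model can be illustrated with a toy in-memory sketch (this is not the Lambda service's actual mechanism, just the shape of it): a poller drains a queue and invokes the handler once per record, the way Lambda polls SQS or a stream on your behalf.

```python
import queue

def handler(event, context=None):
    # The "function": just tag the record as processed.
    return f"processed {event}"

# Toy stand-in for a stream or queue event source (e.g. SQS).
source = queue.Queue()
for record in ["msg-1", "msg-2", "msg-3"]:
    source.put(record)

# The poller plays the role of the Lambda service in the pull model:
# read the queue, invoke the function for each record it finds.
results = []
while not source.empty():
    results.append(handler(source.get()))

print(results)  # ['processed msg-1', 'processed msg-2', 'processed msg-3']
```

The key point the sketch shows is the inversion: in the pull model your function never sees the queue, the polling layer does, and the function only ever receives delivered records.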
Examples of stream-based event sources would be Amazon DynamoDB and Amazon Kinesis Data Streams, and these stream records are organized into shards. Lambda polls the stream for records and then attempts to invoke the function. If there's a failure, AWS Lambda won't read any new records from the shard until the failed batch of records expires or is processed successfully. In the non-streaming case, which would be SQS, Lambda polls the queue for records. If an invocation fails or times out, the message is returned to the queue, and Lambda keeps retrying the failed message until it's processed successfully. If the message expires, which is something you can configure with SQS, it's simply discarded. You can create the mapping between an event source and a Lambda function right inside the console, and that's typically how you would set it up manually without using infrastructure as code.

All right, let's talk about permissions. This is definitely an easy place to get tripped up when you're first using AWS Lambda. There are two types of permissions. The first is the event source's permission to trigger the Lambda function; this is the invocation permission. The second is the set of permissions the Lambda function needs to interact with other services; these are the run permissions. Both are handled via the IAM service, the AWS Identity and Access Management service.

An IAM resource policy tells the Lambda service which push event sources have permission to invoke the Lambda function, and these resource policies make it easy to grant access to a Lambda function across AWS accounts. A good example: if you have an S3 bucket in your account and you need to invoke a function in another account, you can create a resource policy that allows the two to interact. The resource policy for a Lambda function is called a function policy, and when you add a trigger to your Lambda function from the console, the function policy is generated automatically; it allows the event source to take the lambda:InvokeFunction action.

A good example would be giving Amazon S3 permission to invoke a Lambda function called MyFirstFunction. The effect would be Allow; under Principal you would have the service s3.amazonaws.com; the action would be lambda:InvokeFunction; the resource would be the ARN of the Lambda function; and the condition would be the ARN of the bucket. And really, that's it in a nutshell.
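That S3 function policy might be sketched as the following document, shown here as a Python dict; the region, account ID, function name, and bucket name are all placeholders invented for illustration.

```python
import json

# Hypothetical ARNs: account ID, region, function and bucket names
# are placeholders, not real resources.
function_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:MyFirstFunction",
            # Restrict the grant to events coming from this one bucket.
            "Condition": {
                "ArnLike": {"AWS:SourceArn": "arn:aws:s3:::my-source-bucket"}
            },
        }
    ],
}
print(json.dumps(function_policy, indent=2))
```

Each element matches the description above: Allow effect, the S3 service principal, the invoke action, the function ARN as the resource, and the bucket ARN as the condition.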
The Lambda execution role grants your Lambda function permission to access AWS services and resources, and you select or create the execution role when you create a Lambda function. The IAM policy defines the actions the Lambda function is allowed to take, and the trust policy allows the Lambda service to assume the execution role. To grant permissions to AWS Lambda to assume a role, you have to have permission for the iam:PassRole action.

A couple of examples of relevant policies for an execution role: the IAM policy we talked about earlier would allow the function to interact with S3, and another example would let it interact with CloudWatch Logs, creating a log group and streaming logs to it. The trust policy gives the Lambda service permission to assume the role and invoke the Lambda function on your behalf.

Now let's talk about an overview of authoring and configuring Lambda functions. Really, to start with, to create a Lambda function you first need to create a Lambda function deployment package, which is a .zip or .jar file that consists of your code and any dependencies. With Lambda, you can use the programming language and integrated development environment that you're most familiar with, and you can actually bring the code you've already written. Lambda supports lots of different languages: Node.js, Python, Ruby, Java, Go, and .NET runtimes. You can also implement a custom runtime if you want to use a different language, which is actually pretty cool.

When you create a Lambda function, you specify the handler; the Lambda function handler is the entry point. A few different aspects of it are important to pay attention to. The event object provides information about the event that triggered the Lambda function. This can be a predefined object that an AWS service generates; in the AWS console, for example, you can ask for these sample objects and it will give you the JSON structure so you can test things out. The contents of an event object include everything you need to actually invoke the function. The context object is generated by AWS and provides runtime information, things like environment details, the AWS request ID, the log stream name, or the remaining time in milliseconds.
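A handy way to exercise the context object locally is to stub it. The attribute and method names below (`aws_request_id`, `log_stream_name`, `get_remaining_time_in_millis`) mirror the real Python runtime's context object, but the values are fake ones made up for testing.

```python
class FakeContext:
    """Stand-in for the context object the Lambda runtime injects;
    names mirror the real Python context, values are fabricated."""
    aws_request_id = "00000000-1111-2222-3333-444444444444"
    log_stream_name = "2024/01/01/[$LATEST]abcdef"

    def get_remaining_time_in_millis(self):
        # Fixed value for local testing; the real runtime counts down.
        return 2500

def lambda_handler(event, context):
    # Read runtime information from the context object.
    return {
        "request_id": context.aws_request_id,
        "log_stream": context.log_stream_name,
        "ms_left": context.get_remaining_time_in_millis(),
    }

print(lambda_handler({}, FakeContext()))
```

The same stub pattern works in unit tests, so your handler logic never needs to know whether it is running in Lambda or on your laptop.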
For example, the remaining time in milliseconds is the number of milliseconds left before your function times out, and you can get all of that from inside the context object.

So what about an example that runs in Python? It's pretty straightforward, actually. You write a handler, which is a Python function that takes an event and a context; those get passed in, and then you return some kind of message.

A few different best practices to remember for AWS Lambda. Separate the core business logic from the handler method; this makes your code more portable and enables you to target unit tests at the logic without having to worry about the configuration, which is always a really good idea in general. Make sure you have modular functions: a single-purpose function, not a kitchen-sink function. Treat functions as stateless as well: a function basically does one thing, and when it's done, no state is kept anywhere. And only include what you need: you don't want huge Lambda deployment packages, and one of the ways you avoid that is by reducing the time it takes Lambda to unpack the deployment package, and by minimizing the complexity of your dependencies as well.

You can also reuse the temporary runtime environment to improve the performance of a function. The temporary runtime environment initializes any external dependencies of the Lambda code, and you can make sure that any externalized configuration or dependencies your code retrieves are stored and referenced locally after the initial run. That means limiting re-initialization of variables and objects on every invocation, and keeping alive and reusing connections, such as HTTP or database connections, that were established during a previous invocation. A really good example of this is a socket connection: if a socket connection takes two seconds to establish, you don't want every Lambda invocation to wait two seconds; you want to reuse that socket connection.

Another good best practice is including logging statements. This is kind of a big one in any cloud computing operation, especially when it's distributed: if you don't log it, there's no way you can figure out what's going on. So you must add logging statements that have context, so you know which particular Lambda instance the event is actually occurring in.
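The reuse idea above can be sketched in a few lines: anything at module scope runs once per execution environment, so the two simulated invocations below share a single "connection". The connection here is just a stand-in dict, and the counter is only there to prove the expensive setup ran once.

```python
# Module scope runs once per execution environment (the cold start),
# so expensive setup done here is reused by every later invocation.
INIT_COUNT = 0

def expensive_connection():
    global INIT_COUNT
    INIT_COUNT += 1          # track how many times setup actually ran
    return {"socket": "open"}  # stand-in for a real connection object

CONNECTION = expensive_connection()  # initialized once, at import time

def lambda_handler(event, context):
    # Reuse the module-level connection instead of reconnecting.
    return {"conn": CONNECTION["socket"], "inits": INIT_COUNT}

# Two "invocations" in the same environment share the connection.
print(lambda_handler({}, None))
print(lambda_handler({}, None))
```

If the connection were created inside the handler instead, every invocation would pay the setup cost, which is exactly the two-second socket penalty described above.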
Also include results, so you know what happened when the Lambda ran, and use environment variables as well, so you can record things like which bucket the function was writing to. And don't write recursive code; that's really a no-no. You want to write very simple functions with Lambda.

There are a few different ways to write Lambda functions. You can use the console editor, which I use all the time; I like to just play around with it. The downside is that if you need to use custom libraries, you're not going to be able to, other than, say, the AWS SDK. But for simple things, it's a great use case. Another option is to upload a package to the AWS console: you can create a deployment package in an IDE, and for example with Visual Studio for .NET you can just right-click and deploy directly into Lambda. Another option is to upload the entire package into an S3 bucket, and Lambda will grab it from that S3 location.

A few different things to remember about Lambda. The memory and the timeout are configurations that determine how the Lambda function performs, and these affect the billing. Now, one of the great things about Lambda is that it's just amazingly inexpensive to run, and the reason is that you're charged based on the number of requests for a function. A few things to remember: if you specify more memory, it's going to increase the cost. The timeout matters too: you can control the duration of the function by having the right kind of timeout, but if you make the timeout too long, it could cost you more money. So really, the best practices are to test the performance of your Lambda and make sure you have the optimum memory size, and also to load test it so that you understand how the timeouts work. Just in general, with anything in cloud computing, you should load test it.

Now let's talk about an important topic, the final topic here, which is how to deploy Lambda functions. Versions are immutable copies of the code and the configuration of your Lambda function, and versioning allows you to publish one or more versions of your Lambda function. As a result, you can work with different variations of your Lambda function in your development workflow, like development, beta, production, et cetera. When you create a Lambda function, there's only one version, the latest version: $LATEST.
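The relationship between memory, duration, and cost described above can be sketched as back-of-envelope arithmetic. The per-GB-second and per-request rates below are illustrative placeholders, not current AWS pricing, so treat the numbers as shape, not quotes.

```python
# Back-of-envelope Lambda cost: duration (s) * memory (GB) * rate per
# GB-second, plus a per-request charge. Rates here are placeholders
# for illustration only, not current AWS pricing.
RATE_PER_GB_SECOND = 0.0000166667
RATE_PER_REQUEST = 0.0000002

def monthly_cost(requests, avg_duration_s, memory_mb):
    gb_seconds = requests * avg_duration_s * (memory_mb / 1024)
    return gb_seconds * RATE_PER_GB_SECOND + requests * RATE_PER_REQUEST

# 1M requests/month at 200 ms each: doubling the memory roughly
# doubles the compute portion of the bill.
print(round(monthly_cost(1_000_000, 0.2, 512), 2))   # 1.87
print(round(monthly_cost(1_000_000, 0.2, 1024), 2))  # 3.53
```

This is why the load-testing advice matters: the optimum memory size is an empirical trade-off, since more memory costs more per second but can also shorten the duration.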
latest.[20:54.080 --> 20:57.240] And you can refer to this function using the ARN[20:57.240 --> 20:59.240] or Amazon resource name.[20:59.240 --> 21:00.640] And when you publish a new version,[21:00.640 --> 21:02.920] AWS Lambda will make a snapshot[21:02.920 --> 21:05.320] of the latest version to create a new version.[21:06.800 --> 21:09.600] You can also create an alias for Lambda function.[21:09.600 --> 21:12.280] And conceptually, an alias is just like a pointer[21:12.280 --> 21:13.800] to a specific function.[21:13.800 --> 21:17.040] And you can use that alias in the ARN[21:17.040 --> 21:18.680] to reference the Lambda function version[21:18.680 --> 21:21.280] that's currently associated with the alias.[21:21.280 --> 21:23.400] What's nice about the alias is you can roll back[21:23.400 --> 21:25.840] and forth between different versions,[21:25.840 --> 21:29.760] which is pretty nice because in the case of deploying[21:29.760 --> 21:32.920] a new version, if there's a huge problem with it,[21:32.920 --> 21:34.080] you just toggle it right back.[21:34.080 --> 21:36.400] And there's really not a big issue[21:36.400 --> 21:39.400] in terms of rolling back your code.[21:39.400 --> 21:44.400] Now, let's take a look at an example where AWS S3,[21:45.160 --> 21:46.720] or Amazon S3 is the event source[21:46.720 --> 21:48.560] that invokes your Lambda function.[21:48.560 --> 21:50.720] Every time a new object is created,[21:50.720 --> 21:52.880] when Amazon S3 is the event source,[21:52.880 --> 21:55.800] you can store the information for the event source mapping[21:55.800 --> 21:59.040] in the configuration for the bucket notifications.[21:59.040 --> 22:01.000] And then in that configuration,[22:01.000 --> 22:04.800] you could identify the Lambda function ARN[22:04.800 --> 22:07.160] that Amazon S3 can invoke.[22:07.160 --> 22:08.520] But in some cases,[22:08.520 --> 22:11.680] you're gonna have to update the notification configuration.[22:11.680 --> 22:14.720] So 
Amazon S3 will invoke the correct version each time[22:14.720 --> 22:17.840] you publish a new version of your Lambda function.[22:17.840 --> 22:21.800] So basically, instead of specifying the function ARN,[22:21.800 --> 22:23.880] you can specify an alias ARN[22:23.880 --> 22:26.320] in the notification of configuration.[22:26.320 --> 22:29.160] And as you promote a new version of the Lambda function[22:29.160 --> 22:32.200] into production, you only need to update the prod alias[22:32.200 --> 22:34.520] to point to the latest stable version.[22:34.520 --> 22:36.320] And you also don't need to update[22:36.320 --> 22:39.120] the notification configuration in Amazon S3.[22:40.480 --> 22:43.080] And when you build serverless applications[22:43.080 --> 22:46.600] as common to have code that's shared across Lambda functions,[22:46.600 --> 22:49.400] it could be custom code, it could be a standard library,[22:49.400 --> 22:50.560] et cetera.[22:50.560 --> 22:53.320] And before, and this was really a big limitation,[22:53.320 --> 22:55.920] was you had to have all the code deployed together.[22:55.920 --> 22:58.960] But now, one of the really cool things you can do[22:58.960 --> 23:00.880] is you can have a Lambda function[23:00.880 --> 23:03.600] to include additional code as a layer.[23:03.600 --> 23:05.520] So layer is basically a zip archive[23:05.520 --> 23:08.640] that contains a library, maybe a custom runtime.[23:08.640 --> 23:11.720] Maybe it isn't gonna include some kind of really cool[23:11.720 --> 23:13.040] pre-trained model.[23:13.040 --> 23:14.680] And then the layers you can use,[23:14.680 --> 23:15.800] the libraries in your function[23:15.800 --> 23:18.960] without needing to include them in your deployment package.[23:18.960 --> 23:22.400] And it's a best practice to have the smaller deployment packages[23:22.400 --> 23:25.240] and share common dependencies with the layers.[23:26.120 --> 23:28.520] Also layers will help you keep your deployment 
package[23:28.520 --> 23:29.360] really small.[23:29.360 --> 23:32.680] So for node, JS, Python, Ruby functions,[23:32.680 --> 23:36.000] you can develop your function code in the console[23:36.000 --> 23:39.000] as long as you keep the package under three megabytes.[23:39.000 --> 23:42.320] And then a function can use up to five layers at a time,[23:42.320 --> 23:44.160] which is pretty incredible actually,[23:44.160 --> 23:46.040] which means that you could have, you know,[23:46.040 --> 23:49.240] basically up to a 250 megabytes total.[23:49.240 --> 23:53.920] So for many languages, this is plenty of space.[23:53.920 --> 23:56.620] Also Amazon has published a public layer[23:56.620 --> 23:58.800] that includes really popular libraries[23:58.800 --> 24:00.800] like NumPy and SciPy,[24:00.800 --> 24:04.840] which does dramatically help data processing[24:04.840 --> 24:05.680] in machine learning.[24:05.680 --> 24:07.680] Now, if I had to predict the future[24:07.680 --> 24:11.840] and I wanted to predict a massive announcement,[24:11.840 --> 24:14.840] I would say that what AWS could do[24:14.840 --> 24:18.600] is they could have a GPU enabled layer at some point[24:18.600 --> 24:20.160] that would include pre-trained models.[24:20.160 --> 24:22.120] And if they did something like that,[24:22.120 --> 24:24.320] that could really open up the doors[24:24.320 --> 24:27.000] for the pre-trained model revolution.[24:27.000 --> 24:30.160] And I would bet that that's possible.[24:30.160 --> 24:32.200] All right, well, in a nutshell,[24:32.200 --> 24:34.680] AWS Lambda is one of my favorite services.[24:34.680 --> 24:38.440] And I think it's worth everybody's time[24:38.440 --> 24:42.360] that's interested in AWS to play around with AWS Lambda.[24:42.360 --> 24:47.200] All right, next week, I'm going to cover API Gateway.[24:47.200 --> 25:13.840] All right, see you next week.If you enjoyed this video, here are additional resources to look at:Coursera + Duke Specialization: 
Building Cloud Computing Solutions at Scale Specialization: https://www.coursera.org/specializations/building-cloud-computing-solutions-at-scale
Python, Bash, and SQL Essentials for Data Engineering Specialization: https://www.coursera.org/specializations/python-bash-sql-data-engineering-duke
AWS Certified Solutions Architect - Professional (SAP-C01) Cert Prep: 1 Design for Organizational Complexity: https://www.linkedin.com/learning/aws-certified-solutions-architect-professional-sap-c01-cert-prep-1-design-for-organizational-complexity/design-for-organizational-complexity?autoplay=true
Essentials of MLOps with Azure and Databricks: https://www.linkedin.com/learning/essentials-of-mlops-with-azure-1-introduction/essentials-of-mlops-with-azure
O'Reilly Book: Implementing MLOps in the Enterprise
O'Reilly Book: Practical MLOps: https://www.amazon.com/Practical-MLOps-Operationalizing-Machine-Learning/dp/1098103017
O'Reilly Book: Python for DevOps: https://www.amazon.com/gp/product/B082P97LDW/
O'Reilly Book: Developing on AWS with C#: A Comprehensive Guide on Using C# to Build Solutions on the AWS Platform: https://www.amazon.com/Developing-AWS-Comprehensive-Solutions-Platform/dp/1492095877
Pragmatic AI: An Introduction to Cloud-based Machine Learning: https://www.amazon.com/gp/product/B07FB8F8QP/
Pragmatic AI Labs Book: Python Command-Line Tools: https://www.amazon.com/gp/product/B0855FSFYZ
Pragmatic AI Labs Book: Cloud Computing for Data Analysis: https://www.amazon.com/gp/product/B0992BN7W8
Pragmatic AI Book: Minimal Python: https://www.amazon.com/gp/product/B0855NSRR7
Pragmatic AI Book: Testing in Python: https://www.amazon.com/gp/product/B0855NSRR7
Subscribe to Pragmatic AI Labs YouTube Channel: https://www.youtube.com/channel/UCNDfiL0D1LUeKWAkRE1xO5Q
Subscribe to 52 Weeks of AWS Podcast: https://52-weeks-of-cloud.simplecast.com
View content on noahgift.com: https://noahgift.com/
View content on Pragmatic AI Labs Website: https://paiml.com/
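The version-and-alias promotion flow covered in the episode can be sketched with boto3. The function name, account ID, and alias below are hypothetical; `publish_version` and `update_alias` are the standard AWS Lambda API calls, and the pure helpers just construct the alias ARN and the S3 notification configuration that references it, so promoting a new version never requires touching the bucket configuration. Treat this as a minimal sketch, not a drop-in deployment script.

```python
# Hypothetical names throughout (my-function, prod, 123456789012); a sketch of
# the promote-with-alias flow from the episode, not a production deploy script.
import json


def lambda_alias_arn(region, account_id, function_name, alias):
    """Build the ARN that points at a function alias rather than a bare version."""
    return f"arn:aws:lambda:{region}:{account_id}:function:{function_name}:{alias}"


def s3_notification_config(alias_arn):
    """Bucket notification configuration that invokes the alias on object creation.

    Because it references the alias, promoting a new function version never
    requires updating this configuration again.
    """
    return {
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": alias_arn,
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    }


def promote_to_prod(function_name, description="promoted"):
    """Publish a snapshot of $LATEST, then repoint the 'prod' alias at it."""
    import boto3  # requires AWS credentials; not exercised in a dry run

    client = boto3.client("lambda")
    version = client.publish_version(
        FunctionName=function_name, Description=description
    )["Version"]
    client.update_alias(
        FunctionName=function_name, Name="prod", FunctionVersion=version
    )
    return version


if __name__ == "__main__":
    arn = lambda_alias_arn("us-east-1", "123456789012", "my-function", "prod")
    print(json.dumps(s3_notification_config(arn), indent=2))
```

With this shape, rolling back is just another `update_alias` call pointing "prod" at the previous version number.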
If you're building your own #VoiceFirst app outside the assistants, you'll also need to think about how you get input from people. Fortunately, there are a number of tools available from AWS and Google Cloud (and others) that will help you do this. Mark and Allen go over the raw technologies involved in Automatic Speech Recognition (ASR) and Natural Language Understanding / Processing (NLU / NLP), how they work (broadly speaking), and some thoughts on what needs to be done for the future. Resources mentioned: Google Cloud Speech-to-Text - https://cloud.google.com/speech-to-text Google Dialogflow - https://cloud.google.com/dialogflow Amazon Lex - https://aws.amazon.com/lex/ Jovo Keyword NLU plugin - https://www.jovo.tech/marketplace/plugin-keywordnlu
Not every assistant needs to be part of Amazon Alexa or the Google Assistant. What if you're developing your own voice assistant? How do you take care of tasks like getting output to your users? In this episode, Allen and Mark give an overview of some of the technologies available to you to send audio exactly the way you want it to sound and some of the tools that are available to use. Resources mentioned: Speech Synthesis Markup Language (SSML) specification - https://www.w3.org/TR/speech-synthesis11/ Amazon Polly - https://aws.amazon.com/polly/ Google Cloud Text to Speech - https://cloud.google.com/text-to-speech SSML Guru - ssml.guru Speech Markdown - SpeechMarkdown.org Jovo Marketplace TTS - https://www.jovo.tech/marketplace#tts
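The SSML-based control over TTS output that the episode discusses can be illustrated with a small sketch. The sample text, rate, and pause values are placeholders; `synthesize_speech` with `TextType="ssml"` is the standard Amazon Polly API, but this is a minimal illustration under those assumptions, not a production snippet.

```python
# Minimal sketch: wrap text in SSML to shape how a TTS engine speaks it.
# The voice ID and sample text are hypothetical placeholders.
def build_ssml(text, rate="slow", pause_ms=300):
    """Wrap text in SSML: insert a pause, then speak at the given prosody rate."""
    return (
        "<speak>"
        f'<break time="{pause_ms}ms"/>'
        f'<prosody rate="{rate}">{text}</prosody>'
        "</speak>"
    )


def synthesize(ssml, voice_id="Joanna"):
    """Send SSML to Amazon Polly and return the API response (audio stream)."""
    import boto3  # requires AWS credentials

    polly = boto3.client("polly")
    return polly.synthesize_speech(
        Text=ssml, TextType="ssml", VoiceId=voice_id, OutputFormat="mp3"
    )


if __name__ == "__main__":
    print(build_ssml("Welcome back."))
```

The same SSML string works across engines that implement the W3C specification, which is part of why tools like Speech Markdown can target several platforms at once.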
I have been in the "zone" working on my sabbatical project but I took time out today to create a podcast of updates. I share the tools I have been using to build the virtual character and some other fun information. Enjoy the show and feel free to share jokes that I can add to the conversation flow of the character. Article mentioned in the podcast: 40 Small Talk Questions Your Chatbot Needs to Know (And Why It Matters): https://medium.com/twyla-ai/40-small-talk-questions-your-chatboamat-needs-to-know-and-why-it-matters-63caf03347f6 Amazon Lex: https://aws.amazon.com/lex/ Cocohub: https://about.cocohub.ai Save the Cat Software - The Language of Storytelling: https://savethecat.com Book - What Would Dolly Do? How to be a Diamond in a Rhinestone World by Lauren Marino: https://www.amazon.com/What-Would-Dolly-Do-Rhinestone/dp/1538713004/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=1644601332&sr=8-3 One of the Amazon Wigs I ordered: https://www.amazon.com/dp/B09M83KJPF?psc=1&ref=ppx_yo2_dt_b_product_details Go interact with a chatbot and let me know if you have any funny jokes I should add to my virtual character. --- Send in a voice message: https://anchor.fm/innovative-teaching/message
Featuring Bratton Riley, CEO of Citibot. Connect w/ Bratton: https://www.linkedin.com/in/bratton-riley/ Sponsor: Nagarro Public Sector. Nagarro excels at helping senior technology leaders in digital disruption from Cloud, AI, Big Data, and digital product engineering to system integration work across platforms. Check out https://www.nagarro.com/en Summary: Bratton Riley, CEO of Citibot, discusses using #artificialintelligence and #machinelearning technology to provide customer service and data management solutions for cities, governments, and corporate clients. In this episode, Riley talks about his journey from mayor's kid to working with mayors, how to join the innovation equation with technology government leaders, and how TensorIoT and Amazon Lex power Citibot's tech stack. He also provides 2-3 tips around citizen engagement. Timestamps: 0:00 Intro 1:19 What is Citibot? 3:08 The importance of building relationships of trust between residents and government 5:22 Customer Example: The City of New Orleans 8:09 Automating conversations between cities and citizens 11:55 Tips for helping governments get ahead of issues 14:22 Outro Sign up for Behind the Mic by TechTables: ⭐️ New Podcast Episodes Every Tuesday & Thursday are delivered right to your inbox. ⭐️ Every Saturday morning, you'll get 3 Interesting Learnings: from Mission-Driven Leadership to Cloud, AI to Cybersecurity, Workforce Challenges, and more; never miss insights from peers and vendor partners across the public sector. ➔ Subscribe here: https://www.techtables.com/ Freebie: Sign up for the 5-Day Mini Series "Behind the Mic: Insights From My First 100 CIO & Tech Leader Interviews." Over the next five days, I'll reach out to drop insights and lessons from podcasts you might have missed across the public sector. ➔ Sign up here: Follow TechTables:
Today, nothing works without a mobile phone. Soon it will be replaced by AR glasses, and we will run our lives through a super app. Natural language will become normal again.
In this TCP Talks episode, Justin Brodley and Jonathan Baker talk with Jonathan Heiliger, co-founder and partner at Vertex Ventures: an early-stage venture capital firm backing innovative technology entrepreneurs. Earlier in his career, at just 19, Jonathan co-founded web hosting provider GlobalCenter and served as CTO. He went on to hold engineering roles at Walmart and Danger, Inc., the latter of which was acquired by Microsoft. He was also Vice President of Infrastructure and Operations at Facebook (now Meta), and a general partner at North Bridge Ventures. The latter firm's portfolio included Quora, Periscope, and Lytro (which was acquired by Google). At Vertex Ventures, Jonathan has helped cutting-edge companies like LaunchDarkly and OpsLevel revolutionize the tech space with continuous delivery and IT service management solutions. Jonathan shares his insights into the shifting market of IT services and explains why decentralizing infrastructure management can help digitally native companies operate at a faster pace. According to Jonathan, the question of IT service infrastructure isn't being adequately addressed. Without properly defining service ownership, businesses looking to scale run the risk of siloing critical knowledge and losing track of service networks. Jonathan also discusses his own experiences running infrastructure at Facebook (oops, Meta), the merits of both centralized and decentralized IT services management, and how he and his partners at Vertex Ventures approach new investments. Featured Guest
Voice-to-text NLP --- Send in a voice message: https://anchor.fm/david-nishimoto/message
Most of us have had some experience with chatbots in daily life, whether ordering food or booking a room through Facebook Messenger, or turning to a bank's chatbot for help when something goes wrong with online banking. Most of the time the system can give us simple guidance and get us to an answer, but sometimes its replies completely miss the point. How is the logic behind these automatic replies defined? And if you build a chatbot on a different platform, how do the answers differ? Listen in as instructors Tina and Shadow share the differences between Amazon Lex and Azure QnA Maker! Have something to tell us? Whatever you'd like to hear, or any suggestions, feel free to let us know: Facebook|Instagram|Spotify|Apple Podcast |Google Podcast |KKBOX Podcast
Catch up on the latest news while you go about your day! Radio-style show 「毎日AWS」. Good morning, this is Sugaya, your Friday host. Today I'll pick out and introduce the updates released on 06/17. Share your thoughts on Twitter with the hashtag #サバワ! ■ Talk script: [AWS updates 06/17] AWS KMS now supports Multi-Region Keys, plus 6 more (#毎日AWS #220) ■ UPDATE PICKUP: AWS KMS now supports Multi-Region Keys / Amazon EC2 F1 instances announce a new shell, F1.S.1.0 / Amazon RDS for PostgreSQL now supports the Extension Allowlists feature / The AWS Amplify CLI can now attach permissions boundaries / AWS Copilot v1.8 is released / Amazon Personalize can now analyze unstructured text and use it for user recommendations / Amazon Lex now supports multi-valued slots, so a single slot can hold a list ■ Serverworks SNS: Twitter / Facebook ■ Serverworks blog: Serverworks Engineer Blog
Catch up on the latest news while you go about your day! Radio-style show 「毎日AWS」. Good morning, this is Furukawa, your Tuesday host. Today I'll pick out and introduce the updates released on 6/11. Share your thoughts on Twitter with the hashtag #サバワ! ■ Talk script: https://blog.serverworks.co.jp/aws-update-2021-06-11 ■ UPDATE PICKUP 1. Amazon Lex updates its limits for intents and slot types 2. Amazon EC2 adds a new AMI property 3. AWS Managed Microsoft AD and AD Connector now support AD authentication with AWS Transfer Family 4. Amazon SageMaker Feature Store now supports reading multiple records at once with the BatchGetRecord API 5. AWS Elemental MediaConnect adds support for input source selection and prioritization ■ Serverworks SNS: Twitter / Facebook ■ Serverworks blog: Serverworks Engineer Blog
As much as Amazon and Google dominate the voice AI headlines, one size doesn't always fit all. There's a trend within the voice industry of brands creating their own voice assistants. The BBC have created Beeb, Mercedes have My Mercedes, Capital One has Eno, and plenty of others are following. And even among those who aren't creating their own high-level digital assistant to serve as the AI front end to the company's products and services, many want narrow implementations of voice AI across apps, IVR, the web and other channels that Alexa and Google don't reach. While you can use Amazon Lex and Google Dialogflow for these instances, some companies don't want their data to be held by those companies. Some want, or even need, more control over the language model. This is where Voxta plays: enabling brands to implement their own assistants. Co-founders Kavita Reddi and Sirish Reddi join us to share how they do it. We chat about the approach to creating custom voice assistants, the value of Voxta, the differences between voice AI in India and Europe, and plenty more. SPONSORED BY THE CONVERSATION DESIGN INSTITUTE Companies around the world are looking for conversation designers, a rare breed of people to help them advance communication between people and AI. That's where we help! Conversation Design Institute leads in training and certification for Conversation Designers, Conversational Copywriters, and AI Trainers. Our human-centric workflow has proven itself around the world. Our certificates ensure you create winning conversational experiences. As advocates for the Conversation Design profession, we continue to work closely with tech partners, agencies and other key players in the industry. 
With this support, Conversation Design Institute is well on its way to becoming the number one platform in conversation design. Save 25% with code CDI972VUX25OFF. FIND OUT MORE Links: Voxta.com Voxta on Twitter Connect with Kavita on LinkedIn See acast.com/privacy for privacy and opt-out information.
In another slightly delayed episode Arjen, JM, and Guy talk about all the many things that were announced in October. But before that, they will first discuss exactly how badly Lex understands "a fair shake of the sauce bottle". Talk to us in our Slack or on Twitter!

The News

Finally in Sydney
- Amazon Connect supports Amazon Lex bots using the Australian English dialect
- Amazon EC2 G4dn Bare Metal Instances with NVIDIA T4 Tensor Core GPUs, now available in 15 additional regions
- AWS IoT SiteWise is now available in Asia Pacific (Singapore) and Asia Pacific (Sydney) AWS regions
- Amazon Relational Database Service (RDS) Snapshot Export to S3 available in additional regions

Serverless
- Introducing AWS Lambda Extensions – In preview | AWS Compute Blog
- Announcing Amazon CloudWatch Lambda Insights (preview)
- New – Use AWS PrivateLink to Access AWS Lambda Over Private AWS Network | AWS News Blog
- Amazon EventBridge announces support for Dead Letter Queues
- AWS Step Functions now supports Amazon Athena service integration
- Amazon API Gateway now supports disabling the default REST API endpoint

Containers
- Amazon EKS now supports Kubernetes version 1.18
- Amazon EKS now supports the Los Angeles AWS Local Zones
- Amazon EKS now supports configurable Kubernetes service IP address range
- Amazon ECS extensions for AWS Cloud Development Kit now available as a Developer Preview
- AWS Elastic Beanstalk Adds Support for Running Multi-Container Applications on AL2 based Docker Platform
- Fluent Bit supports Amazon S3 as a destination to route container logs
- AWS App Mesh supports cross account sharing of ACM Private Certificate Authority
- Introducing the AWS Load Balancer Controller
- AWS Copilot CLI launches v0.5 to let users deploy scheduled jobs and more

EC2 & VPC
- AWS Nitro Enclaves – Isolated EC2 Environments to Process Confidential Data | AWS News Blog
- Announcing SSL/TLS certificates for Amazon EC2 instances with AWS Certificate Manager (ACM) for Nitro Enclaves
- New – Application Load Balancer Support for End-to-End HTTP/2 and gRPC | AWS News Blog
- AWS Compute Optimizer enhances EC2 instance type recommendations with Amazon EBS metrics
- AWS Cloud Map simplifies service discovery with optional parameters
- AWS Global Accelerator launches port overrides
- AWS IoT SiteWise launches support for VPC private links
- AWS Site-to-Site VPN now supports health notifications

Dev & Ops
- AWS CloudFormation now supports increased limits on five service quotas
- AWS CloudFormation Guard – an open-source CLI for infrastructure compliance – is now generally available
- AWS CloudFormation Drift Detection now supports CloudFormation Registry resource types
- Amazon CloudWatch Synthetics now supports prebuilt canary monitoring dashboard
- Amazon CloudWatch Synthetics launches Recorder to generate user flow scripts for canaries
- AWS and Grafana Labs launch AWS X-Ray data source plugin
- Now author AWS Systems Manager Automation runbooks using Visual Studio Code
- AWS Systems Manager now supports free-text search of runbooks
- AWS Systems Manager now allows filtering automation executions by applications or environments
- Now use AWS Systems Manager to view vulnerability identifiers for missing patches on your Linux instances
- Port forwarding sessions created using Session Manager now support multiple simultaneous connections
- Now customize your Session Manager shell environment with configurable shell profiles
- AWS End of Support Migration Program for Windows Server now available as a self-serve solution for customers
- EC2 Image Builder now supports AMI distribution across AWS accounts
- Announcing general availability of waiters in the AWS SDK for Java 2.x
- Porting Assistant for .NET is now open source
- Amazon Corretto 8u272, 11.0.9, 15.0.1 quarterly updates are now available

Security
- AWS Config adds 15 new sample conformance pack templates and introduces simplified setup experience for conformance packs
- AWS IAM Access Analyzer now supports archive rules for existing findings
- AWS AppSync adds support for AWS WAF
- AWS Shield now provides global and per-account event summaries to all AWS customers
- Amazon CloudWatch Logs now supports two subscription filters per log group
- Amazon S3 Object Ownership is available to enable bucket owners to automatically assume ownership of objects uploaded to their buckets
- Protect Your AWS Compute Optimizer Recommendation Data with customer master keys (CMKs) Stored in AWS Key Management Service
- Manage access to AWS centrally for Ping Identity users with AWS Single Sign-On
- Amazon Elasticsearch Service adds native SAML Authentication for Kibana
- Amazon Inspector has expanded operating system support for Red Hat Enterprise Linux (RHEL) 8, Ubuntu 20.04 LTS, Debian 10, and Windows Server 2019

Data Storage & Processing
- New – Amazon RDS on Graviton2 Processors | AWS News Blog
- Amazon ElastiCache now supports M6g and R6g Graviton2-based instances
- Easily restore an Amazon RDS for MySQL database from your MySQL 8.0 backup
- Amazon RDS for PostgreSQL supports concurrent major version upgrades of read replicas
- Amazon Aurora enables dynamic resizing for database storage space
- AWS Lake Formation now supports Active Directory and SAML providers for Amazon Athena
- AWS Lake Formation now supports cross account database sharing
- Now generally available – design and visualize Amazon Keyspaces data models more easily by using NoSQL Workbench
- You now can manage access to Amazon Keyspaces by using temporary security credentials for the Python, Go, and Node.js Cassandra drivers
- Amazon ElastiCache on Outposts is now available
- Amazon EMR now supports placing your EMR master nodes in distinct racks to reduce risk of simultaneous failure
- Amazon EMR integration with AWS Lake Formation is now generally available
- Amazon EMR now provides up to 35% lower cost and up to 15% improved performance for Spark workloads on Graviton2-based instances
- AWS Glue Streaming ETL jobs support schema detection and evolution
- AWS Glue supports reading from self-managed Apache Kafka
- AWS Glue crawlers now support Amazon DocumentDB (with MongoDB compatibility) and MongoDB collections
- Amazon Kinesis Data Analytics now supports Force Stop and a new Autoscaling status
- Kinesis Client Library now enables multi-stream processing
- Announcing cross-database queries for Amazon Redshift (preview)
- Amazon Redshift announces support for Lambda UDFs and enables tokenization
- New Amazon Neptune engine release now enforces a minimum version of TLS 1.2 and SSL client connections
- AWS Database Migration Service now supports Amazon DocumentDB (with MongoDB compatibility) as a source

AI & ML
- Amazon SageMaker Autopilot now Creates Machine Learning Models up to 40% Faster with up to 200% Higher Accuracy
- Now launch Amazon SageMaker Studio in your Amazon Virtual Private Cloud (VPC)
- Amazon SageMaker Price Reductions – Up to 18% for ml.P3 and ml.P2 instances
- Amazon SageMaker Studio Notebooks now support custom images
- Amazon Rekognition adds support for six new content moderation categories
- Amazon Rekognition now detects Personal Protective Equipment (PPE) such as face covers, head covers, and hand covers on persons in images
- Amazon Transcribe announces support for AWS PrivateLink for Batch APIs
- Amazon Kendra now supports custom data sources
- Amazon Kendra adds Confluence Server connector
- Amazon Textract announces improvements to reduce average API processing times by up to 20%

Other Cool Stuff
- AWS DeepRacer announces new Community Races updates
- Amazon WorkSpaces introduces sharing images across accounts
- AWS Batch now supports Custom Logging Configurations, Swap Space, and Shared Memory
- Amazon Connect supports Amazon Lex bots using the British English dialect
- Amazon Connect chat now provides automation and personalization capabilities with whisper flows
- CloudWatch Application Insights offers new, improved user interface
- CloudWatch Application Insights adds EBS volume and API Gateway metrics
- Announcing AWS Budgets price reduction
- Announcing AWS Budgets Actions
- Resource Access Manager Support is now available on AWS Outposts
- Announcing Amazon CloudFront Origin Shield
- Announcing AWS Distro for OpenTelemetry in Preview
- Introducing Amazon SNS FIFO – First-In-First-Out Pub/Sub Messaging | AWS News Blog
- Amazon SNS now supports selecting the origination number when sending SMS messages
- Amazon SES now offers list and subscription management capabilities

Nano candidates
- Amazon WorkDocs now supports Dark Mode on iOS
- Amazon Corretto 8u272, 11.0.9, 15.0.1 quarterly updates are now available
- AWS OpsWorks for Configuration Management now supports new version of Chef Automate

Sponsors
Gold Sponsor: Innablr
Silver Sponsors: AC3, CMD Solutions, DoIT International
Catch up on the latest news while you go about your day! Radio-style show 「毎日AWS!」. Good morning, this is Kato from Serverworks. Today I'll introduce 8 updates released on 9/15. (There were a lot of updates this time, so the broadcast is split into two parts.) Share your thoughts on Twitter with the hashtag #サバワ! ■ UPDATE lineup: Amplify JavaScript now supports server-side rendering frameworks such as Next.js and Nuxt.js / Amazon Lex now supports British English / Amazon CloudFront now supports Brotli compression / Amazon Transcribe adds automatic language identification / Amazon Kinesis Data Analytics now supports streaming applications using Java-based Apache Beam / Amazon Kinesis Data Analytics now supports the Apache Flink Kinesis Data Firehose Producer v2.0.0 / A new digital course on deploying .NET applications on AWS is now available / CloudFormation now supports Amazon Kendra ■ Serverworks SNS: Twitter / Facebook ■ Serverworks blog: Serverworks Engineer Blog
Catch up on the latest news while you go about your day! Radio-style show 「毎日AWS!」. Good morning, this is Kato from Serverworks. Today I'll introduce 3 updates released on 8/6. Share your thoughts on Twitter with the hashtag #サバワ! ■ UPDATE lineup: AWS Wavelength is now generally available in Boston and the San Francisco Bay Area / Amazon Transcribe announces custom language models / Amazon Lex announces improved accuracy and confidence scores ■ Serverworks SNS: Twitter / Facebook ■ Serverworks blog: Serverworks Engineer Blog
Catch up on the latest news while you go about your day! Radio-style show 「毎日AWS!」. Good morning, this is Kato from Serverworks. Today I'll introduce 11 updates released on 6/30. Share your thoughts on Twitter with the hashtag #サバワ! ■ UPDATE lineup: Amazon RDS Proxy is now generally available / The AWS CodeDeploy agent now supports automatic installation and scheduled updates / Amazon CloudWatch now supports resource utilization metrics for AWS CodeBuild / Amazon EFS increases the minimum throughput for file systems / Amazon QuickSight now supports Athena data sources protected by Lake Formation / Amazon Connect can now run a flow after an agent disconnects a call / AWS SDK for C++ version 1.8 is now generally available / Amazon QuickSight adds histograms and cross-region APIs / Amazon DocumentDB now supports t3.medium / The Amazon Chime SDK now supports audio and video calls from mobile browsers / Amazon Lex is now available in the Asia Pacific (Tokyo) Region / AWS Systems Manager Patch Manager now supports new versions of Linux platforms ■ Serverworks SNS: Twitter / Facebook ■ Serverworks blog: Serverworks Engineer Blog
In this episode, I go through our latest announcements on Amazon Polly, Amazon Rekognition, Amazon Lex, Amazon Personalize, and Amazon SageMaker Ground Truth. I demo how to use a Lex chatbot for search queries on Kendra. ⭐️⭐️⭐️ Don't forget to subscribe to be notified of future episodes ⭐️⭐️⭐️
Rekognition blog post: https://aws.amazon.com/blogs/media/streamline-media-analysis-tasks-with-amazon-rekognition-video/
Kendra blog post: https://aws.amazon.com/blogs/aws/reinventing-enterprise-search-amazon-kendra-is-now-generally-available/
SageMaker Ground Truth blog post: https://aws.amazon.com/blogs/aws/new-label-3d-point-clouds-with-amazon-sagemaker-ground-truth/
For more content:
* AWS blog: https://aws.amazon.com/blogs/aws/auth...
* Medium blog: https://medium.com/@julsimon
* YouTube: https://youtube.com/juliensimonfr
* Podcast: http://julsimon.buzzsprout.com
* Twitter: https://twitter.com/@julsimon
Catch up on the latest news while you go about your day! Radio-style show 「毎日AWS」, which started out ahead of schedule on YouTube, begins Podcast distribution in July! (This episode was originally posted to YouTube before the Podcast distribution started.) Good morning, this is Kato from Serverworks! Today I'll introduce 8 updates released on 6/17. Share your thoughts on Twitter with the hashtag #サバワ! I tend to stumble over my words, so I'll do my best to deliver this smoothly! ■ Lineup: Amazon RDS for PostgreSQL supports new minor versions / Amazon Corretto for Alpine Linux is now in preview / Announcing a built-in search intent that enables integration between Amazon Lex and Amazon Kendra / The Amazon DynamoDB: Building NoSQL Database-Driven Applications course is now available on Coursera / Amazon API Gateway WebSocket APIs now allow subprotocols / Amazon RDS for Oracle now supports managing diagnostic data with the Oracle Automatic Diagnostic Repository Command Interpreter (ADRCI) utility / Announcing AWS Snowcone, a new edge computing and data transfer device / AWS DataSync now supports data transfer for AWS Snowcone ■ Serverworks SNS: Twitter / Facebook ■ Serverworks blog: Serverworks Engineer Blog
AWS Morning Brief for the week of January 6th, 2020.
In this episode, I talk about new features on Amazon Lex (chatbots), Amazon Textract (text extraction), Amazon Personalize (recommendation & personalization), and Amazon Transcribe (speech to text). I also demo profanity filtering with Transcribe (run for cover!). ⭐️⭐️⭐️ Don't forget to subscribe to be notified of future episodes ⭐️⭐️⭐️
Additional resources mentioned in the podcast:
* Amazon SageMaker Ground Truth demo: https://www.youtube.com/watch?v=oEcH8amMcT8
* Amazon Textract demo: https://youtu.be/8kobcRynTTA
* Introduction to Graph Convolutional Networks: https://youtu.be/2bfxnj1J00A
This podcast is also available in video: https://www.youtube.com/watch?v=BretMPkpipg
For more content, follow me at https://medium.com/@julsimon and at https://twitter.com/@julsimon
Amazon Connect now gives businesses a single unified contact center for voice and chat. Come see how you can create an Amazon Connect contact center instance, and learn how to use a single contact flow to design a natural language chatbot experience using Amazon Lex that escalates to a human agent on voice and chat. We demo unified routing, queueing, chatbot design, agent experience, and concurrency for voice and chat.
Amazon Lex creates conversational interfaces powered by the same deep learning technologies used in Alexa. AWS Batch dynamically provisions the optimal quantity and type of compute resources based on the volume and specific resource requirements of the batch jobs submitted. In this chalk talk, learn how Amazon Lex uses Amazon ECS to dynamically run these batch jobs to create conversational bots.
The New York Times was looking to radically transform its computer telephony integration (CTI) system supporting its customer-care operations. After evaluating other solutions, the company chose Amazon Connect because of its ease of migration, flexibility, and customer-centric capabilities. Learn how The New York Times successfully rewrote its IVR and call workflows using Amazon Lex for natural language detection, how it uses Amazon Transcribe to stream call data to a datastore, and how it is writing Amazon Lex bots. Hear how the company was able to leverage cutting-edge technology, at scale, while reducing operational costs.
https://aws.amazon.com/lex/
How can Financial Services companies meet growing customer demands for personalized, high-quality service while satisfying regulatory obligations? In this session, learn how Amazon Connect, a self-service, cloud-based contact center, can integrate with machine learning services on AWS such as Amazon Transcribe, Amazon Comprehend, and Amazon Lex to enable financial institutions to deliver transformational omni-channel experiences to their customers while complying with regulations like MiFID II, the GDPR, and the SEC's data retention rules. Learn how to use Amazon Connect to easily set up a cloud-based contact center solution that scales to support businesses of any size. Then learn how to integrate Amazon Connect with machine learning services on AWS to make contact center content available for search and analysis by natural language processing tools, which can yield valuable insights into customer sentiment, customer preferences, and the most common issues customers raise during service interactions. Complete Title: AWS re:Invent 2018: Financial Svcs: Build Customer-Centric Contact Centers with Amazon Connect & Machine Learning (FSV301)
With Amazon Connect, a cloud-based contact center service, businesses can create dynamic contact flows that provide personalized caller experiences by taking history and past context into consideration to anticipate callers' needs. Join us to learn how customers are executing successful strategies using Amazon Lex to add NLU chatbots into their Amazon Connect customer experience workflows. Learn how Amazon Lex, an AI service that enables you to create intelligent conversational chatbots, can turn your contact flows into natural conversations using the same technology that powers Amazon Alexa. Learn how to automate repeatable routine tasks such as password resets, order status, and balance inquiries without the need for an agent.
Late in 2017, Mutual of Omaha began a cloud journey to modernize its legacy contact centers. Using Amazon Connect, supported by Amazon Lex, Amazon Polly, AWS Lambda, and Kibana, Accenture helped Mutual of Omaha improve customer engagement, develop self-service features using leading-edge speech recognition, and build powerful analytics to continuously drive positive change. Mutual of Omaha plans to reduce TCO annually with Amazon Connect compared with its legacy solution. As of August 2018, three contact centers are live in Amazon Connect, with several more scheduled to go live in 2018. This session is brought to you by AWS partner, Accenture.
In this talk, we review the challenges of adding a virtual character to AR/VR applications and highlight how Amazon Sumerian solves these challenges. We discuss leading use cases and demonstrate how customers are creating dynamic, interactive virtual concierges using Sumerian hosts integrated with various AWS technologies, such as Amazon Polly, Amazon Lex, Amazon Rekognition, and AWS Lambda.
Anyone can create and publish augmented reality (AR), virtual reality (VR), and 3D applications quickly and easily with Amazon Sumerian. In this session, learn how to use Sumerian to build a scene that can be published and viewed on laptops, mobile phones, VR headsets, and digital signage. Take a tour of the Sumerian interface, and learn how to build a scene, add assets and hosts, and add behaviors to create dynamically animated objects and characters in an AR/VR experience. Also see how Sumerian integrates into AWS services such as Amazon Polly, Amazon Lex, AWS Lambda, Amazon S3, and Amazon DynamoDB.
Analyzing customer service interactions across channels provides a complete 360-degree view of customers. By capturing all interactions, you can better identify the root cause of issues and improve first-call resolution and customer satisfaction. In this session, learn how to integrate Amazon Connect and AWS machine learning services, such as Amazon Lex, Amazon Transcribe, and Amazon Comprehend, to quickly process and analyze thousands of customer conversations and gain valuable insights. With speech and text analytics, you can pick up on emerging service-related trends before they get escalated, or identify and address a potential widespread problem at its inception.
In this demo, learn how Anki used the Amazon Lex chatbot capability to build interactive games that help students learn better.
You've designed and built a well-architected data lake and ingested extreme amounts of structured and unstructured data. Now what? In this session, we explore real-world use cases where data scientists, developers, and researchers have discovered new and valuable ways to extract business insights using advanced analytics and machine learning. We review Amazon S3, Amazon Glacier, and Amazon EFS, the foundation for the analytics clusters and data engines. We also explore analytics tools and databases, including Amazon Redshift, Amazon Athena, Amazon EMR, Amazon QuickSight, Amazon Kinesis, Amazon RDS, and Amazon Aurora; and we review the AWS machine learning portfolio and AI services such as Amazon SageMaker, AWS Deep Learning AMIs, Amazon Rekognition, and Amazon Lex. We discuss how all of these pieces fit together to build intelligent applications.
Building Your Own Q & A Bot Do your customers have questions they need answered 24/7? Ever wanted to build your own Q & A Bot? John Calhoun, AWS Solutions Architect, joins Simon to show you how! Shownotes: Q & A Bot Building Blog: https://aws.amazon.com/blogs/machine-learning/creating-a-question-and-answer-bot-with-amazon-lex-and-amazon-alexa/ Amazon Lex: https://aws.amazon.com/lex/ Amazon Elasticsearch Service: https://aws.amazon.com/elasticsearch-service/
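The blog post above pairs Amazon Lex with Amazon Elasticsearch Service for answer lookup. As a minimal sketch of the fulfillment side only, the Lambda handler below answers from an in-memory FAQ dictionary standing in for the search index; the questions, answers, and matching logic are illustrative assumptions, not details from the post:

```python
# Minimal sketch of a Q & A fulfillment Lambda for a Lex (V1) bot.
# The FAQ dict stands in for the Amazon Elasticsearch Service index
# described in the blog post; its contents are made up for illustration.

FAQ = {
    "return policy": "You can return items within 30 days.",
    "store hours": "We are open 9am-5pm, Monday to Friday.",
}

def close(message):
    """Build the Lex V1 'Close' dialog action that ends the conversation."""
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }

def lambda_handler(event, context):
    # Lex passes the raw user utterance in inputTranscript.
    question = event.get("inputTranscript", "").lower()
    for key, answer in FAQ.items():
        if key in question:
            return close(answer)
    return close("Sorry, I don't know the answer to that yet.")
```

In the full solution from the post, the keyword scan would be replaced by a query against the Elasticsearch index, but the Lex event and response shapes stay the same.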
It is update time! Simon shares a great selection of new things for customers - what will be your favourite? Shownotes: Amazon Polly Gives WordPress a Voice! - AWS Machine Learning Blog | https://aws.amazon.com/blogs/machine-learning/amazon-polly-gives-wordpress-a-voice/ Amazon Polly New Phonation Tag Enables You to Create Softer Speech | https://aws.amazon.com/about-aws/whats-new/2018/02/amazon-polly-new-phonation-tag-enables-you-to-create-softer-speech/ Amazon Connect Adds Speech Synthesis Markup Language Support for Amazon Lex Chatbots | https://aws.amazon.com/about-aws/whats-new/2018/02/amazon-connect-adds-speech-synthesis-markup-language-support-for-amazon-lex-chatbots/ Announcing Responses Capability in Amazon Lex and SSML Support in Text Response | https://aws.amazon.com/about-aws/whats-new/2018/02/announcing-responses-capability-in-amazon-lex-and-ssml-support-in-text-response/ Now Export and Import your Amazon Lex Chatbot Schema | https://aws.amazon.com/about-aws/whats-new/2018/02/now-export-and-import-your-amazon-lex-chatbot-schema/ Amazon DynamoDB Now Supports Server-Side Encryption at Rest | https://aws.amazon.com/about-aws/whats-new/2018/02/amazon-dynamodb-now-supports-server-side-encryption-at-rest/ Amazon DynamoDB Accelerator (DAX) Releases SDKs for Python and .NET, Support for T2 Instances, and now available in the Asia Pacific (Singapore) and Asia Pacific (Sydney) Regions | https://aws.amazon.com/about-aws/whats-new/2018/02/amazon-dynamodb-accelerator-dax-releases-sdks-for-python-and-dot-net-support-for-t2-instances-and-now-available-in-the-asia-pacific-singapore-and-asia-pacific-sydney-regions/ Amazon Cognito Simplifies User Migration | https://aws.amazon.com/about-aws/whats-new/2018/02/amazon-cognito-simplifies-user-migration/ Amazon ECS Adds New Endpoint to Access Task Metrics and Metadata | https://aws.amazon.com/about-aws/whats-new/2018/02/amazon-ecs-adds-new-endpoint-to-access-task-metrics-and-metadata/ AWS Fargate Supports Container Workloads 
Regulated By ISO, PCI, SOC, and HIPAA | https://aws.amazon.com/about-aws/whats-new/2018/03/aws-fargate-supports-container-workloads-regulated-by-iso-pci-soc-and-hipaa/ Target Tracking Available for Container Service Auto Scaling in Amazon ECS Console | https://aws.amazon.com/about-aws/whats-new/2018/02/target-tracking-available-for-container-service-auto-scaling-in-amazon-ecs-console/ AWS Shield now Integrated with AWS CloudTrail | https://aws.amazon.com/about-aws/whats-new/2018/02/aws-shield-now-integrated-with-aws-cloudtrail/ Amazon GameLift Introduces Backfill Functionality to FlexMatch, the Dynamic Matchmaking Service for Multiplayer Experiences | https://aws.amazon.com/about-aws/whats-new/2018/02/amazon-gamelift-introduces-backfill-functionality-to-flexmatch-the-dynamic-matchmaking-service-for-multiplayer-experiences/ Amazon GameLift FleetIQ and Spot Instances Reduce Costs by up to 90% | https://aws.amazon.com/about-aws/whats-new/2018/02/amazon-gamelift-fleetiq-and-spot-instances-reduce-costs-by-up-to-90-percent/ New AWS Direct Connect sites land in Paris and Taipei | https://aws.amazon.com/about-aws/whats-new/2018/02/new-aws-direct-connect-sites-land-in-paris-and-taipei/ Inter-Region VPC Peering is Now Available in Nine Additional AWS Regions | https://aws.amazon.com/about-aws/whats-new/2018/02/inter-region-vpc-peering-is-now-available-in-nine-additional-aws-regions/ Longer Format Resource IDs are Now Available in Amazon EC2 | https://aws.amazon.com/about-aws/whats-new/2018/02/longer-format-resource-ids-are-now-available-in-amazon-ec2/ AWS AppSync Adds new GraphQL Functionality and Removes Whitelist Approvals from Preview | https://aws.amazon.com/about-aws/whats-new/2018/02/aws-appsync-adds-new-graphql-functionality-and-removes-whitelist-approvals-from-preview/ AWS AppSync Expands to Three New Regions, Adds API Key Extension Feature | https://aws.amazon.com/about-aws/whats-new/2018/02/aws-appsync-expands-to-three-new-regions-adds-api-key-extension-feature/ 
AWS Config Adds Support for AWS WAF RuleGroups | https://aws.amazon.com/about-aws/whats-new/2018/02/aws-config-adds-support-for-aws-waf-rulegroups/ New Products for Managed Rules on AWS WAF | https://aws.amazon.com/about-aws/whats-new/2018/02/new-products-for-managed-rules-on-aws-waf/ Amazon Inspector Now Supports Windows Server 2016 | https://aws.amazon.com/about-aws/whats-new/2018/02/amazon-inspector-now-supports-windows-server-2016/ AWS Trusted Advisor's S3 Bucket Permissions Check Is Now Free | https://aws.amazon.com/about-aws/whats-new/2018/02/aws-trusted-advisors-s3-bucket-permissions-check-is-now-free/ Amazon EC2 Auto Scaling Adds Support for Service-Linked Roles | https://aws.amazon.com/about-aws/whats-new/2018/02/amazon-ec2-auto-scaling-adds-support-for-service-linked-roles/ Network Load Balancer now Supports Cross-Zone Load Balancing | https://aws.amazon.com/about-aws/whats-new/2018/02/network-load-balancer-now-supports-cross-zone-load-balancing/ Auto Scaling in Amazon SageMaker is now Available | https://aws.amazon.com/about-aws/whats-new/2018/02/auto-scaling-in-amazon-sagemaker-is-now-available/ AWS DeepLens Announces the Ability to Directly Import Models from Amazon SageMaker | https://aws.amazon.com/about-aws/whats-new/2018/02/aws-deeplens-announces-the-ability-to-directly-import-models-from-amazon-sagemaker/ Introducing the Real-Time Insights on AWS Account Activity | https://aws.amazon.com/about-aws/whats-new/2018/02/introducing-the-real-time-insights-on-aws-account-activity/ AWS Serverless Application Repository Now Generally Available | https://aws.amazon.com/about-aws/whats-new/2018/02/aws-serverless-application-repository-now-generally-available/ Amazon AppStream 2.0 Now Supports Copying Images Across AWS Regions | https://aws.amazon.com/about-aws/whats-new/2018/02/amazon-appstream-2_0-now-supports-copying-images-across-aws-regions/ Amazon CloudWatch Events now Supports AWS Batch as an Event Target | 
https://aws.amazon.com/about-aws/whats-new/2018/03/amazon-cloudwatch-events-now-supports-aws-batch-as-an-event-target/ AWS Service Catalog Announces AutoTags for Automatic Tagging of Provisioned Resources | https://aws.amazon.com/about-aws/whats-new/2018/03/aws-service-catalog-announces-autotags-for-automatic-tagging-of-provisioned-resources/ AWS Service Catalog Launches Brand Your Console to Deliver a Customizable User Experience | https://aws.amazon.com/about-aws/whats-new/2018/03/aws-service-catalog-launches-brand-your-console-to-deliver-a-customizable-user-experience/ AWS Storage Gateway Expands Automation with New CloudWatch Event, and Support for "Requester Pays" Buckets | https://aws.amazon.com/about-aws/whats-new/2018/03/aws-storage-gateway-expands-automation-with-new-cloudwatch-event-and-support-for-requester-pays-buckets/ Amazon Redshift Spectrum Now Supports Scalar JSON and Ion Data Types | https://aws.amazon.com/about-aws/whats-new/2018/03/amazon-redshift-spectrum-now-supports-scalar-json-and-ion-data-types/ PostgreSQL 10 now Supported in Amazon RDS | https://aws.amazon.com/about-aws/whats-new/2018/02/postgresql-10-now-supported-in-amazon-rds/ AWS GovCloud (US) Region Adds Third Availability Zone | https://aws.amazon.com/about-aws/whats-new/2018/03/aws-govcloud-us-region-adds-third-availability-zone/ AWS Snowball Now Available in AWS Singapore Region | https://aws.amazon.com/about-aws/whats-new/2018/03/aws-snowball-now-available-in-aws-singapore-region/
Today's retail customers expect exceptional customer service and tailored solutions to their problems. Chat and voice interfaces provide retailers with new ways to interact with their customers and to provide intelligent, efficient solutions. In this session, we build and demonstrate a chatbot powered by Amazon AI that can autonomously guide a customer through the process of reporting an undelivered or defective item and quickly offer appropriate solutions. Learn how to redefine your customer service experience by tying together Amazon Lex, AWS Lambda, and Amazon DynamoDB to easily add chatbot functionality to your retail solution.
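One way the Lex-Lambda-DynamoDB wiring can look: the helper below translates a fulfilled "report an issue" intent into a DynamoDB put_item payload. The table name, slot names, and intent shape are assumptions for illustration, not details from the session:

```python
import uuid

def build_ticket_item(intent_event):
    """Translate a fulfilled Lex intent into a DynamoDB put_item payload.

    Table and attribute names here are illustrative; in the session's
    architecture, a Lambda function would pass this dict to
    boto3.client("dynamodb").put_item(**payload).
    """
    slots = intent_event["currentIntent"]["slots"]
    return {
        "TableName": "CustomerIssues",  # assumed table name
        "Item": {
            "TicketId": {"S": str(uuid.uuid4())},
            "OrderId": {"S": slots["OrderId"]},
            # e.g. "undelivered" or "defective", captured as a Lex slot
            "IssueType": {"S": slots["IssueType"]},
        },
    }
```

Keeping the payload construction separate from the AWS call makes the Lambda's business logic easy to unit test without touching DynamoDB.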
This talk includes a story and a recipe. The story is about a nerd who bought his first motorbike, got a license for it, and started hacking to make it interact and talk, all in two months. The recipe is a technical one that explains how to use Amazon Lex and AWS Lambda to quickly prototype and deploy a serverless chatbot connected with an embedded device in order to build an Internet of Things (IoT) application. We discuss how you can integrate your IoT application with Amazon Lex using AWS Lambda and Amazon API Gateway, how to exchange session data to have a contextual conversation, and how to provide a successful bot experience. Expect to leave this session knowing how to build, deploy, and publish a bot, and how to attach it to an IoT device, with the potential to bring to life any object that surrounds you.
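Session data exchange in Lex (V1) happens through the sessionAttributes map that a Lambda fulfillment function receives and returns. A minimal sketch, with an illustrative "last_command" attribute (the talk does not prescribe this particular design):

```python
def lambda_handler(event, context):
    """Fulfillment sketch that carries context between conversation turns.

    sessionAttributes is the Lex V1 mechanism for exchanging session data;
    the "last_command" key used here is an assumption for illustration.
    """
    session = event.get("sessionAttributes") or {}
    command = event["currentIntent"]["name"]
    previous = session.get("last_command", "nothing")
    session["last_command"] = command  # persists for the next turn
    return {
        "sessionAttributes": session,
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": f"Running {command}; last time you asked for {previous}.",
            },
        },
    }
```

Because Lex echoes the returned sessionAttributes back on the next request, the bot can hold a contextual conversation with the device without any external state store.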
Amazon Connect is a cloud-based contact center service that allows you to create dynamic contact flows and personalized caller experiences by using callers' history and responses to anticipate their needs. Learn how Amazon Lex, an AI service that allows you to create intelligent conversational "chatbots," can turn your contact flows into natural conversations using the same technology behind Amazon Alexa. Routine tasks such as password resets, order status, and balance inquiries can be automated without an agent. In this session, you will hear from Asurion about their Amazon Connect contact center environment and how they enhanced the customer and agent experience with Amazon Lex.
In this session, we cover an integration of Amazon Lex with a contact center solution. We demonstrate how an Amazon Lex chatbot can be inserted into an interactive voice response (IVR) workflow in a contact center, enabling users to interact with the chatbot using natural language. We walk through a ready-to-deploy integration that includes building the bot, setting up the IVR, and managing the call routing. We also describe the best practices for selective routing based on user intent, exchange of information between the chatbot/IVR, and handover to a human agent.
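The selective-routing step described above can be as simple as a lookup from the intent Lex recognizes to a contact-center queue, with a human-agent fallback for anything unrecognized. Queue and intent names below are assumptions for illustration, not values from the session:

```python
# Sketch of selective routing based on user intent: intents the bot can
# fully automate go to self-service; everything else hands over to a
# human agent. All names here are illustrative.

ROUTES = {
    "CheckOrderStatus": "self-service",
    "ResetPassword": "self-service",
    "BillingDispute": "billing-agents",
}

def route_call(intent_name):
    """Return the queue for a recognized intent, or escalate to an agent."""
    return ROUTES.get(intent_name, "general-agents")
```

In an IVR integration, this decision would typically live in a Lambda function invoked from the contact flow after the Lex chatbot returns the recognized intent.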
Amazon Lex is a service for building conversational interfaces into any application using voice and text, and Amazon Polly is a service that turns text into lifelike speech. This session combines both AWS services: the presenter demonstrates how to build a Help Desk chatbot that features spoken-voice interfaces. Attendees will gain the foundational skills needed to enrich their applications with natural, conversational interfaces. Liberty Mutual Insurance will also present its chat platform architecture to demonstrate how it is using Amazon Lex as an employee digital assistant.
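To give a Lex text response a spoken voice, the bot can hand the reply to Amazon Polly. The sketch below builds the parameter dict for Polly's synthesize_speech call; wrapping the text in SSML to slow delivery slightly is a design choice of this sketch, not something the session prescribes, and the voice is just a default:

```python
def build_speech_request(text, voice_id="Joanna"):
    """Build parameters for Polly's synthesize_speech operation.

    In a Help Desk bot, the Lex response text would be passed here and the
    result handed to boto3: boto3.client("polly").synthesize_speech(**params).
    The slight rate reduction via SSML is an assumption chosen to make
    phone audio easier to follow.
    """
    ssml = f'<speak><prosody rate="95%">{text}</prosody></speak>'
    return {
        "Text": ssml,
        "TextType": "ssml",
        "OutputFormat": "mp3",
        "VoiceId": voice_id,
    }
```

Separating parameter construction from the API call keeps the voice-formatting logic testable without an AWS connection.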
In this session, you will learn best practices for implementing simple to advanced AI/ML use cases on AWS. First, we review the decision points for using democratised services such as Amazon Lex and Amazon Polly, and their integration with services such as Amazon Connect. Then we look at real use cases: optimising the customer experience with chatbots and streamlining responses with Amazon Connect. Finally, we dive deep into the most common of these patterns and cover design and implementation considerations. By the end of the session you will understand how to use Amazon Lex to optimise the user experience through different user interactions.
Enterprises must transform at the pace of technology. Through chatbots built with Amazon Lex, enterprises are improving business productivity, reducing execution time, and taking advantage of efficiency savings for common operational requests. These include inventory management, human resources requests, self-service analytics, and even the onboarding of new employees. In this session, learn how Infor integrated Amazon Lex into their standard technology stack, with several use cases based on advisory, assistant, and automation roles deeply rooted in their expanding AI strategy. This strategy powers one of the major functionalities of Infor Coleman to enable their users to make business decisions more quickly.
In this session, discover how to build a multichannel conversational interface that leverages a preprocessing layer in front of Amazon Lex. This preprocessing layer can enable customers to integrate their conversational interface with external services and use multiple specialized Amazon Lex chatbots as part of an overall solution. As an example of how to integrate with an external service, learn how to integrate with Skype. Watch it in action through a chatbot demonstration with interaction through Skype messaging and voice.
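The preprocessing layer's core job is deciding which specialized Amazon Lex bot should receive an incoming message, whatever the channel (Skype or otherwise). A minimal keyword-based dispatcher, with bot names and keywords invented for illustration:

```python
# Sketch of the preprocessing layer in front of multiple specialized
# Lex bots. Bot names and routing keywords are illustrative assumptions;
# a production dispatcher might use a classifier instead of keywords.

BOT_KEYWORDS = {
    "HRBot": ("vacation", "payroll", "benefits"),
    "ITBot": ("password", "laptop", "vpn"),
}

def select_bot(message, default_bot="GeneralBot"):
    """Return the name of the Lex bot a message should be forwarded to."""
    text = message.lower()
    for bot, keywords in BOT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return bot
    return default_bot
```

The selected bot name would then be used when forwarding the message to that bot's runtime endpoint, keeping each specialized bot small and focused.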
Simon takes a walk through LOTS of the updates that have been happening for AWS Customers. Shownotes New – Per-Second Billing for EC2 Instances and EBS Volumes - AWS Blog | https://aws.amazon.com/blogs/aws/new-per-second-billing-for-ec2-instances-and-ebs-volumes/ Amazon Virtual Private Cloud (VPC) now allows customers to expand their existing VPCs | https://aws.amazon.com/about-aws/whats-new/2017/08/amazon-virtual-private-cloud-vpc-now-allows-customers-to-expand-their-existing-vpcs/ New – Descriptions for Security Group Rules - AWS Blog | https://aws.amazon.com/blogs/aws/new-descriptions-for-security-group-rules/ New – Stop & Resume Workloads on EC2 Spot Instances - AWS Blog | https://aws.amazon.com/blogs/aws/new-stop-resume-workloads-on-ec2-spot-instances/ Amazon VPC NAT Gateways now support Amazon CloudWatch Monitoring and Resource Tagging | https://aws.amazon.com/about-aws/whats-new/2017/09/amazon-vpc-nat-gateways-now-support-amazon-cloudwatch-monitoring-and-resource-tagging/ AWS VPN Update – Custom PSK, Inside Tunnel IP, and SDK update | https://aws.amazon.com/about-aws/whats-new/2017/10/aws-vpn-update-custom-psk-inside-tunnel-ip-and-sdk-update/ Elasticsearch 5.5 now available on Amazon Elasticsearch Service | https://aws.amazon.com/about-aws/whats-new/2017/09/elasticsearch-5_5-now-available-on-amazon-elasticsearch-service/ Amazon Route 53 Traffic Flow Announces Support For Geoproximity Routing With Traffic Biasing | https://aws.amazon.com/about-aws/whats-new/2017/09/amazon-route-53-traffic-flow-announces-support-for-geoproximity-routing-with-traffic-biasing/ New Network Load Balancer – Effortless Scaling to Millions of Requests per Second - AWS Blog | https://aws.amazon.com/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/ Now Available – EC2 Instances with 4 TB of Memory - AWS Blog | https://aws.amazon.com/blogs/aws/now-available-ec2-instances-with-4-tb-of-memory/ Use OpenCL Development Environment with Amazon EC2 F1 
FPGA Instances to accelerate your C/C++ applications, also F1 instances are now available in US West (Oregon) and EU (Ireland) Regions | https://aws.amazon.com/about-aws/whats-new/2017/09/use-opencl-development-environment-with-amazon-ec2-f1-fpga-instances-to-accelerate-your-c-c-plus-plus-applications-also-f1-instances-are-now-available-in-us-west-oregon-and-eu-ireland-regions/ Announcing: React Native Starter Project with One-Click AWS Deployment and Serverless Infrastructure - AWS Mobile Blog | https://aws.amazon.com/blogs/mobile/announcing-react-native-starter-project-with-one-click-aws-deployment-and-serverless-infrastructure/ Announcing enhancements to the Amazon Lex test console | https://aws.amazon.com/about-aws/whats-new/2017/09/announcing-enhancements-to-the-amazon-lex-test-console/ Announcing support for synonyms and slot value validation on Amazon Lex | https://aws.amazon.com/about-aws/whats-new/2017/08/announcing-support-for-synonyms-and-slot-value-validation-on-amazon-lex/ Now Specify Request Level Attributes with Amazon Lex | https://aws.amazon.com/about-aws/whats-new/2017/09/now-specify-request-level-attributes-with-amazon-lex/ New Amazon Lex Built-in Slot Types for Phone numbers, Speed, and Weight, Available in Preview | https://aws.amazon.com/about-aws/whats-new/2017/09/new-amazon-lex-built-in-slot-types-for-phone-numbers-speed-and-weight-available-in-preview/ Export your Amazon Lex chatbot to the Alexa Skills Kit | https://aws.amazon.com/about-aws/whats-new/2017/09/export-your-amazon-lex-chatbot-to-the-alexa-skills-kit/ Apple Core ML and Keras Support Now Available for Apache MXNet - AWS AI Blog | https://aws.amazon.com/blogs/ai/apple-core-ml-and-keras-support-now-available-for-apache-mxnet/ AWS CodePipeline now provides notifications on pipeline, stage, and action status changes | https://aws.amazon.com/about-aws/whats-new/2017/09/aws-codepipeline-now-provides-notifications-on-pipeline-stage-and-action-status-changes/ Amazon Pinpoint Introduces 
Two-Way Text Messaging | https://aws.amazon.com/about-aws/whats-new/2017/09/amazon-pinpoint-introduces-two-way-text-messaging/ Amazon Cognito Integrates with Amazon Pinpoint to Add Analytics for User Pools and Enrich Pinpoint Campaigns | https://aws.amazon.com/about-aws/whats-new/2017/09/amazon-cognito-integrates-with-amazon-pinpoint-to-add-analytics-for-user-pools-and-enrich-pinpoint-campaigns/ Amazon Redshift now supports late-binding views referencing Amazon Redshift and Redshift Spectrum external tables | https://aws.amazon.com/about-aws/whats-new/2017/09/amazon-redshift-now-supports-late-binding-views-referencing-amazon-redshift-and-redshift-spectrum-external-tables/ Custom Artifacts on AWS Device Farm - AWS Mobile Blog | https://aws.amazon.com/blogs/mobile/custom-artifacts-on-aws-device-farm/ Amazon Aurora Can Migrate Encrypted Databases from Amazon RDS for MySQL | https://aws.amazon.com/about-aws/whats-new/2017/09/amazon-aurora-can-migrate-encrypted-databases-from-amazon-rds-for-mysql/ Amazon EC2 Systems Manager Adds Raspbian OS and Raspberry Pi Support | https://aws.amazon.com/about-aws/whats-new/2017/09/amazon-ec2-systems-manager-adds-raspbian-os-and-raspberry-pi-support/ AWS Greengrass is available in Asia Pacific (Tokyo) Region. 
| https://aws.amazon.com/about-aws/whats-new/2017/09/aws-greengrass-is-available-in-asia-pacific-tokyo-region/ AWS CloudTrail Enables Option to Add All Amazon S3 Buckets to Data Events | https://aws.amazon.com/about-aws/whats-new/2017/09/aws-cloudtrail-enables-option-to-add-all-amazon-s3-buckets-to-data-events/ Amazon Kinesis Analytics improves application performance for high volume data streams | https://aws.amazon.com/about-aws/whats-new/2017/09/amazon-kinesis-analytics-improves-application-performance-for-high-volume-data-streams/ New Kinesis Analytics stream processing functions for time series analytics, real time sessionization, and more | https://aws.amazon.com/about-aws/whats-new/2017/09/new-kinesis-analytics-stream-processing-functions-for-time-series-analytics-real-time-sessionization-and-more/ AWS CloudFormation provides Stack Termination Protection | https://aws.amazon.com/about-aws/whats-new/2017/09/aws-cloudformation-provides-stack-termination-protection/
Join host Oli in the latest episode of AWS TechChat as he speaks with special guest, Dean Samuels, Solutions Architect Manager, HKT, AWS. Oli and Dean dive into the latest updates and announcements around Network Load Balancer, Amazon Lex, Amazon EC2 Systems Manager, AWS Greengrass, Per-Second billing, Middle East Region, Apache MXNet, and Prime Day 2017.
Join host Dr Pete in the latest episode of AWS TechChat, as he shares information and updates around VMware on AWS, improvements to signing in to your AWS account, Amazon Route 53, Elastic Load Balancing, Amazon VPC, Amazon EC2, Amazon SES, Amazon RDS, Amazon CloudWatch, AWS CloudFormation, AWS Elastic Beanstalk, Amazon Lex, and new Quick Starts on deploying IBM MQ on AWS and deploying NGINX Plus on the AWS Cloud.
In this episode Simon gets you caught up on some useful new services including those to help you with ETL, Migration and Data Leakage protection! Shownotes: AWS Glue: https://aws.amazon.com/blogs/aws/launch-aws-glue-now-generally-available/ AWS CloudHSM: https://aws.amazon.com/blogs/aws/aws-cloudhsm-update-cost-effective-hardware-key-management/ AWS Macie: https://aws.amazon.com/blogs/aws/launch-amazon-macie-securing-your-s3-buckets/ AWS IAM Console: https://aws.amazon.com/about-aws/whats-new/2017/07/the-aws-iam-console-now-remembers-your-preferences-for-table-column-selections-and-policy-viewing-and-editing/ AWS Migration Hub: https://aws.amazon.com/blogs/aws/aws-migration-hub-plan-track-enterprise-application-migration/ Amazon EFS Encryption at Rest: https://aws.amazon.com/blogs/aws/new-encryption-at-rest-for-amazon-elastic-file-system-efs/ AWS Batch and CloudFormation: https://aws.amazon.com/about-aws/whats-new/2017/08/aws-batch-adds-support-for-aws-cloudformation/ AWS SAM Local: https://aws.amazon.com/blogs/aws/new-aws-sam-local-beta-build-and-test-serverless-applications-locally/ EC2 Systems Manager Maintenance Windows: https://aws.amazon.com/blogs/mt/maintenance-windows-support-for-new-task-types-using-amazon-ec2-systems-manager/ AWS CloudTrail in Amazon Lex: https://aws.amazon.com/about-aws/whats-new/2017/08/aws-cloudtrail-integration-is-now-available-in-amazon-lex/
Join Dr. Pete and Russ in another episode of AWS TechChat as they discuss the latest AWS announcements and updates around AWS CodeStar, Amazon Redshift, Amazon EC2, Amazon DynamoDB, AWS Database Migration, AWS X-Ray, Amazon Aurora, Amazon Rekognition, Amazon Polly, Amazon Lex, Amazon Mobile Hub Integration, AWS Lambda, AWS Marketplace and Simplified Pricing API.
Today is April 17th, 2017 and it's an all new Human Factors Cast hosted by Nick Roome with Blake Arnsdorff, and Mia Jaramillo. -MasterCard trials biometric bankcard with embedded fingerprint reader -Facebook is building a brain-computer interface -Facebook launches beta of Spaces, its goofy and fun social VR platform -MIT's app only needs a second to teach you a new language -Google Home can now recognize up to six voices and give personalized responses -Amazon Lex, the technology behind Alexa, opens up to developers -Eerie tech promises to copy anyone's voice from just 1 minute of audio -Biased bots: Human prejudices sneak into artificial intelligence systems -25 is 'golden age' for the ability to make random choices: At their peak, humans outcompete many computer algorithms in generating seemingly random patterns Follow us on Twitter: http://www.twitter.com/HFactorsPodcast Follow us on Facebook: https://www.facebook.com/HumanFactorsCast Follow us on Soundcloud: https://www.soundcloud.com/HumanFactorsCast Our official website: https://www.humanfactorscast.com Follow Nick: https://www.twitter.com/Nick_Roome Follow Blake: https://www.twitter.com/DontPanicUX Follow Mia: https://www.linkedin.com/in/manuelajaramillo/ Take a deeper look into the human element in our ever changing digital world. Human Factors Cast is a podcast that investigates the sciences of psychology, engineering, biomechanics, industrial design, physiology and anthropometry and how it affects our interaction with technology. As an online source for human factors, psychology, and design news, Human Factors Cast is your essential resource for new, exciting stories in the field.
CES 2017 What's Alexa up to at CES? Amazon Echo
Amazon Lex is a service for building conversational interfaces into any application using voice and text. With Lex, the same deep learning engine that powers Amazon Alexa is now available to any developer, enabling you to build sophisticated, natural language chatbots into your new and existing applications. Amazon Lex provides the deep functionality and flexibility of natural language understanding (NLU) and automatic speech recognition (ASR) to allow you to build highly engaging user experiences with lifelike, conversational interactions. In this introductory session, find out how Lex empowers you to define entirely new categories of products made possible through conversational interfaces.
Amazon Echo and Alexa have shown that voice interfaces provide significant benefits to users – interactions are easy, fast, and context-driven. In this hands-on session, you’ll see how to add compelling voice and chat interfaces to your mobile apps, using Amazon Lex for processing conversations and triggering corresponding actions in your backend systems, all without having to manage any infrastructure. You’ll leave knowing how to build apps that can “Find me a nearby hotel” or “Reorder supplies for the copier”.
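Calling a Lex (V1) bot from a mobile backend comes down to the lex-runtime post_text operation. The helper below assembles its parameters; the bot name and alias are placeholders for whatever you have defined in your account:

```python
def build_post_text(bot_name, bot_alias, user_id, text):
    """Build parameters for the Lex V1 runtime post_text call,
    i.e. boto3.client("lex-runtime").post_text(**params).

    bot_name and bot_alias are placeholders; user_id should be a stable
    identifier per conversation so Lex can keep session state across turns.
    """
    return {
        "botName": bot_name,
        "botAlias": bot_alias,
        "userId": user_id,
        "inputText": text,  # e.g. "Find me a nearby hotel"
    }
```

The response from post_text carries the recognized intent, slot values, and the bot's reply message, which the app can render as chat or speak aloud.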
Voice User Interface: Lex and Polly. Topics: Google Home, Amazon Echo, Watson Conversation, Amazon Lex, Amazon Polly.
...Eventually, someone has to clean up the leftover pizza. ...That sweet OpEx. ..."Easy to stay." Amazon came out with a slew of features last week. This week we discuss them and take some cracks at the broad, portfolio approach at AWS compared to historic (like .Net) platform approaches. We also discuss footwear and what to eat and where to stay in Las Vegas. Footwear Kenneth Cole slip on shoes (http://amzn.to/2gH6OzD). Keen Austin shoes, slip-on (http://amzn.to/2h2gveX) and lace (http://amzn.to/2ggll4y). The Doc Martens Coté used to wear, Hickmire (http://amzn.to/2hlPnIJ). Mid-roll Coté: the Cloud Native roadshows are over, but check out the cloud native WIP I have at cote.io/cloud2 (http://cote.io/cloud2) or, just check out some excerpts on working with auditors (https://medium.com/@cote/auditors-your-new-bffs-918c8671897a#.et5tv7p7l), selecting initial projects (https://medium.com/@cote/getting-started-picking-your-first-cloud-native-projects-or-every-digital-transformation-starts-d0b1295f3712#.v7jpyjvro), and dealing with legacy (https://medium.com/built-to-adapt/deal-with-legacy-before-it-deals-with-you-cc907c800845#.ixtz1kqdz). Matt: Presenting at the CC Dojo #3, talking DevOps in Tokyo (https://connpass.com/event/46308/) AWS re:Invent Matt Ray heroically summarizes all here. Richard has a write-up as well (https://www.infoq.com/news/2016/12/aws-reinvent-recap).
- RedMonk re:Cap (http://redmonk.com/sogrady/2016/12/07/the-redmonk-reinvent-recap/).

Global Partner Summit
- Don't hedge your bets: "AWS has no time for uncommitted partners" (http://www.zdnet.com/article/andy-jassy-warns-aws-has-no-time-for-uncommitted-partners/).
- "10,000 new Partners have joined the APN in the past 12 months" (https://aws.amazon.com/blogs/aws/aws-global-partner-summit-report-from-reinvent-2016/).

Day 1 - "I'd like to tell you about…"
- Amazon Lightsail (https://aws.amazon.com/blogs/aws/amazon-lightsail-the-power-of-aws-the-simplicity-of-a-vps/): monthly instances with memory, CPU, storage & a static IP. Bitnami! Hello Digital Ocean & Linode.
- Amazon Athena (https://aws.amazon.com/blogs/aws/amazon-athena-interactive-sql-queries-for-data-in-amazon-s3/): SQL queries over data in S3, based on the Presto distributed SQL engine. JSON, CSV, log files, delimited text, others. Coté: this seems pretty amazing.
- Amazon Rekognition (https://aws.amazon.com/blogs/aws/amazon-rekognition-image-detection-and-recognition-powered-by-deep-learning/): image detection & recognition.
- Amazon Polly (https://aws.amazon.com/blogs/aws/polly-text-to-speech-in-47-voices-and-24-languages/): text-to-speech in 47 voices and 24 languages. Coté: Makes transcripts?
- Amazon Lex (https://aws.amazon.com/blogs/aws/amazon-lex-build-conversational-voice-text-interfaces/): conversational voice & text interface builder (i.e., chatbots). Coté: make chat-bots and such.
- AWS Greengrass (https://aws.amazon.com/blogs/aws/aws-greengrass-ubiquitous-real-world-computing/): local Lambda processing for IoT. Coté: is this supposed to be, like, for running Lambda things on disconnected devices? Like fPaaS in my car?
- AWS Snowball Edge & Snowmobile (https://aws.amazon.com/blogs/aws/aws-snowball-edge-more-storage-local-endpoints-lambda-functions/): local processing of data? S3/NFS and local Lambda processing?
- I'm thinking easy hybrid on-ramp. Not just me (https://twitter.com/CTOAdvisor/status/806320423881162753). More on it (http://www.techrepublic.com/article/how-amazon-is-moving-closer-to-on-premises-compute-with-snowball-edge/).
- Move exabytes in weeks (https://aws.amazon.com/blogs/aws/aws-snowmobile-move-exabytes-of-data-to-the-cloud-in-weeks/): "Snowmobile is a ruggedized, tamper-resistant shipping container 45 feet long, 9.6 feet high, and 8 feet wide. It is waterproof, climate-controlled, and can be parked in a covered or uncovered area adjacent to your existing data center." Coté: LEGOS!
- More instance types, Elastic GPUs, F1 instances, PostgreSQL for Aurora: high I/O (I3: 3.3 million IOPS, 16 GB/s), compute (C5: 72 vCPUs, 144 GiB), memory (R4: 488 GiB), burstable (T2, shared) (https://aws.amazon.com/blogs/aws/ec2-instance-type-update-t2-r4-f1-elastic-gpus-i3-c5/).
- Mix an EC2 instance type with a 1-8 GiB GPU (https://aws.amazon.com/blogs/aws/in-the-work-amazon-ec2-elastic-gpus/).
- More! (https://aws.amazon.com/blogs/aws/developer-preview-ec2-instances-f1-with-programmable-hardware/) F1: FPGA EC2 instances, also available for use in the AWS Marketplace.
- PostgreSQL compatibility for Aurora (https://aws.amazon.com/blogs/aws/amazon-aurora-update-postgresql-compatibility/). RDS vs. Aurora Postgres? Aurora is more fault tolerant, apparently?
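The Athena item in the Day 1 list above amounts to "point SQL at files in S3". A minimal sketch of kicking off such a query: the database, table, and bucket names here are invented for illustration, and the parameter dict matches what boto3's Athena client expects for start_query_execution.

```python
# Sketch of starting an Athena query over, say, CSV access logs in S3.
# "weblogs", "access_logs", and "my-example-bucket" are made-up names.
# The returned dict is shaped for boto3's
# athena_client.start_query_execution(**params).

def athena_query_params(sql, database, output_bucket):
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Database": database},
        # Athena writes result files to this S3 location.
        "ResultConfiguration": {
            "OutputLocation": f"s3://{output_bucket}/athena-results/"
        },
    }

params = athena_query_params(
    "SELECT status, COUNT(*) AS hits FROM access_logs GROUP BY status",
    database="weblogs",
    output_bucket="my-example-bucket",
)
# With AWS credentials configured, you would then run:
# boto3.client("athena").start_query_execution(**params)
```

No cluster to provision; you pay per query over whatever is already sitting in S3, which is the appeal Coté is reacting to.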
Day 2
- AWS OpsWorks for Chef Automate (https://aws.amazon.com/opsworks/chefautomate/). Chef blog (https://blog.chef.io/2016/12/01/chef-automate-now-available-fully-managed-service-aws/). Fully managed Chef Server & Automate; the previous OpsWorks is now called "OpsWorks Stacks". Cloud Opinion approves of the Chef strategy (https://twitter.com/cloud_opinion/status/804374597449584640).
- EC2 Systems Manager: tools for managing EC2 & on-premises systems (https://aws.amazon.com/ec2/systems-manager/).
- AWS CodeBuild: managed elastic build service with testing (https://aws.amazon.com/blogs/aws/aws-codebuild-fully-managed-build-service/).
- AWS X-Ray (https://aws.amazon.com/blogs/aws/aws-x-ray-see-inside-of-your-distributed-application/): distributed debugging service for EC2/ECS/Lambda? An "easy way for developers to 'follow-the-thread' as execution traverses EC2 instances, ECS containers, microservices, AWS database and messaging services".
- AWS Personal Health Dashboard (https://aws.amazon.com/blogs/aws/new-aws-personal-health-dashboard-status-you-can-relate-to/): personalized AWS monitoring & CloudWatch Events auto-remediation. Disruptive to PaaS monitoring & APM (New Relic, Datadog, AppDynamics).
- AWS Shield (https://aws.amazon.com/blogs/aws/aws-shield-protect-your-applications-from-ddos-attacks/): DDoS protection.
- Amazon Pinpoint: mobile notification & analytics service (https://aws.amazon.com/blogs/aws/amazon-pinpoint-hit-your-targets-with-aws/).
- AWS Glue: managed data catalog & ETL (extract, transform & load) service for data analysis.
- AWS Batch: automated AWS provisioning for batch jobs (https://aws.amazon.com/blogs/aws/aws-batch-run-batch-computing-jobs-on-aws/).
- C# in Lambda, Lambda@Edge, and AWS Step Functions. Werner Vogels: "serverless, there is no cattle, only the herd."
- Lambda@Edge (https://aws.amazon.com/blogs/aws/coming-soon-lambda-at-the-edge/): runs in response to CloudFront events, "'intelligent' processing of HTTP requests at a location that is close".
- Step Functions (https://aws.amazon.com/blogs/aws/new-aws-step-functions-build-distributed-applications-using-visual-workflows/): a visual workflow "state machine" for Lambda functions. More (https://serverless.zone/faas-is-stateless-and-aws-step-functions-provides-state-as-a-service-2499d4a6e412).
- BLOX (https://aws.amazon.com/blogs/compute/introducing-blox-from-amazon-ec2-container-service/): EC2 Container Service scheduler. An open-source scheduler that watches CloudWatch events to manage ECS deployments. Blox.github.io.

Analysis discussion for all the AWS stuff
- Jesus! I couldn't read it all!
- So, what's the role of Lambda here? It seems like the universal process thingy - like AppleScript, bash scripts, etc. for each part: if you need/want to add some customization to each thing, put a Lambda on it.
- What's the argument against just going full Amazon, in the same way you'd go full .Net, etc.? Is it cost? Lock-in? Performance? (People always talk about Amazon being kind of flakey at times - but what isn't flakey, your in-house-run IT? Come on.)

BONUS LINKS! Not covered in episode.

Docker for AWS
- "EC2 Container Service, Elastic Beanstalk, and Docker for AWS all cost nothing; the only costs are those incurred by using AWS resources like EC2 or EBS." (http://www.infoworld.com/article/3145696/application-development/docker-for-aws-whos-it-really-for.html) Docker gets paid on usage?
- Apparently an easier learning curve than ECS + AWS services, but whither Blox?

Time to Break up Amazon?
- Someone has an opinion (http://www.geekwire.com/2016/new-study-compares-amazon-19th-century-robber-barons-urges-policymakers-break-online-retail-giant/).

HPE Discover, all about the "Hybrid Cloud"
- Hybrid it up!
- HPE updates its converged infrastructure and hybrid cloud software lineup (http://www.zdnet.com/article/hpe-updates-its-converged-infrastructure-hybrid-cloud-software-lineup/).
- Killed "The Machine" (http://www.theregister.co.uk/2016/11/29/hp_labs_delivered_machine_proof_of_concept_prototype_but_machine_product_is_no_more/).
- HPE's Synergy software, based on OpenStack (is this just Helion rebranded?).
- Not great timing for a conference.
- Sold the OpenStack & Cloud Foundry bits to SUSE (http://thenewstack.io/suse-add-hpes-openstack-cloud-foundry-portfolio-boost-kubernetes-investment/), the new "preferred Linux partner".

How Google is Challenging AWS
- Ben on public cloud (https://stratechery.com/2016/how-google-cloud-platform-is-challenging-aws/): "open-sourcing Kubernetes was Google's attempt to effectively build a browser on top of cloud infrastructure and thus decrease switching costs; the company's equivalent of Google Search will be machine learning."
- Exponent.fm episode 097 — Google vs AWS (http://exponent.fm/episode-097-google-versus-aws/).

Recommendations
- Brandon: Apple Wifi Calling (https://support.apple.com/en-us/HT203032) & Airplane mode (https://support.apple.com/en-us/HT204234). Westworld is worth watching (http://www.hbo.com/westworld).
- Matt: Backyard kookaburras (https://www.youtube.com/watch?v=DmNn7P59HcQ). Magpies too! (http://www.musicalsoupeaters.com/swooping-season/) This gif (https://media.giphy.com/media/wik7sKOl86OFq/giphy.gif).
- Coté: the W Hotel in Las Vegas (http://www.wlasvegas.com/) and the lobster eggs benedict (https://www.instagram.com/p/BNxAyQbjKCQ/) at Payard's in Caesars.

Outro: "I need my minutes," Soul Position (http://genius.com/Soul-position-i-need-my-minutes-lyrics).
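Appendix: the Step Functions "visual workflow 'state machine'" from the Day 2 notes boils down to a JSON definition in the Amazon States Language. A minimal two-step chain might look like the following; the state names and Lambda ARNs are invented for illustration, not from any real deployment.

```python
import json

# A minimal Step Functions state machine: two Lambda-backed Task states
# chained together. State names ("ExtractData", "LoadData") and the
# function ARNs are hypothetical placeholders.
state_machine = {
    "Comment": "Two Lambda functions chained into a workflow",
    "StartAt": "ExtractData",
    "States": {
        "ExtractData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:extract",
            "Next": "LoadData",
        },
        "LoadData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:load",
            "End": True,
        },
    },
}

definition = json.dumps(state_machine, indent=2)
# With credentials, you would then create it via:
# boto3.client("stepfunctions").create_state_machine(
#     name=..., definition=definition, roleArn=...)
```

This is the "state as a service" point from the serverless.zone link: the functions stay stateless, and the state machine holds the workflow's position between steps.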