
Amazon is throwing its hat into the generative AI ring. But rather than build AI models entirely by itself, it’s recruiting third parties to host models on AWS.
AWS today unveiled Amazon Bedrock, which provides a way to build generative AI-powered apps via pretrained models from startups including AI21 Labs, Anthropic and Stability AI. Available in a “limited preview,” Bedrock also offers access to Titan FMs (foundation models), a family of models trained in-house by AWS.
“Applying machine learning to the real world — solving real business problems at scale — is what we do best,” Vasi Philomin, VP of generative AI at AWS, told TechCrunch in a phone interview. “We think every application out there can be reimagined with generative AI.”
The debut of Bedrock was somewhat telegraphed by the partnerships AWS has inked with generative AI startups in recent months, in addition to its growing investments in the tech required to build generative AI apps.
Last November, Stability AI selected AWS as its preferred cloud provider, and in March, Hugging Face and AWS collaborated to bring the former’s text-generating models onto the AWS platform. More recently, AWS launched a generative AI accelerator for startups and said it would work with Nvidia to build “next-generation” infrastructure for training AI models.
Bedrock and custom models
Bedrock is Amazon’s most forceful play yet for the generative AI market, which could be worth close to $110 billion by 2030, according to estimates from Grand View Research.
With Bedrock, AWS customers can opt to tap into AI models from a variety of providers, including AWS itself, via an API. The details are a bit murky; Amazon hasn’t announced formal pricing, for one. But the company did emphasize that Bedrock is aimed at large customers building “enterprise-scale” AI apps, differentiating it from some of the AI model hosting services out there, like Replicate (plus the incumbent rivals Google Cloud and Azure).
One presumes that generative AI model vendors were incentivized by AWS’ reach or potential revenue sharing to join Bedrock. Amazon didn’t reveal terms of the model licensing or hosting agreements, however.
The third-party models hosted on Bedrock include AI21 Labs’ Jurassic-2 family, whose multilingual models can generate text in Spanish, French, German, Portuguese, Italian and Dutch. Claude, Anthropic's model on Bedrock, can perform a range of conversational and text-processing tasks. Meanwhile, Stability AI’s suite of text-to-image models hosted on Bedrock, including Stable Diffusion, can generate images, art, logos and graphic designs.

As for Amazon’s bespoke offerings, the Titan FM family comprises two models at present, with presumably more to come in the future: a text-generating model and an embedding model. The text-generating model, akin to OpenAI’s GPT-4 (though not necessarily on par performance-wise), can perform tasks like writing blog posts and emails, summarizing documents and extracting information from databases. The embedding model translates text inputs like words and phrases into numerical representations, known as embeddings, that capture the semantic meaning of the text. Philomin claims it’s similar to one of the models that powers searches on Amazon.com.
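To give a sense of what an embedding model enables, here is a toy sketch (not Amazon's actual API, and the vectors are made up): semantically related texts map to nearby vectors, so "nearness" can be computed numerically, which is what powers semantic search.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; closer to 1.0 means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 4-dimensional "embeddings"; a real embedding model returns
# high-dimensional vectors for arbitrary text.
embeddings = {
    "cheap running shoes": [0.9, 0.1, 0.0, 0.3],
    "discount sneakers":   [0.8, 0.2, 0.1, 0.4],
    "garden hose":         [0.1, 0.9, 0.7, 0.0],
}

query = embeddings["cheap running shoes"]
# The semantically similar phrase scores much higher than the unrelated one.
print(cosine_similarity(query, embeddings["discount sneakers"]))
print(cosine_similarity(query, embeddings["garden hose"]))
```

A search backend would embed every product description once, embed each incoming query, and rank results by a similarity score like this.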
AWS customers can customize any Bedrock model by pointing the service at a few labeled examples in Amazon S3, Amazon’s cloud storage service; as few as 20 examples are enough. No customer data is used to train the underlying models, Amazon says.
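Amazon hasn't published the exact file format for those labeled examples, but a plausible shape, assuming a JSON Lines layout of prompt/completion pairs (a common convention for fine-tuning data), might look like this:

```python
import json

# Hypothetical labeled examples for customizing a model. The JSON Lines
# layout and the "prompt"/"completion" field names are assumptions, not
# a documented Bedrock format.
examples = [
    {"prompt": f"Summarize support ticket #{i}", "completion": f"Summary of ticket {i}"}
    for i in range(1, 21)  # Amazon says as few as 20 examples suffice
]

jsonl = "\n".join(json.dumps(ex) for ex in examples)

# A quick sanity check before uploading the file to an S3 bucket.
parsed = [json.loads(line) for line in jsonl.splitlines()]
print(len(parsed))  # 20
```

The resulting file would then be uploaded to an S3 bucket that Bedrock is pointed at during customization.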
“At AWS … we’ve played a key role in democratizing machine learning and making it accessible to anyone who wants to use it,” Philomin said. “Amazon Bedrock is the easiest way to build and scale generative AI applications with foundation models.”
Of course, given the unanswered legal questions surrounding generative AI, one wonders exactly how many customers will bite.
Microsoft has seen success with its generative AI model suite, Azure OpenAI Service, which bundles OpenAI models with additional features geared toward enterprise customers. As of March, over 1,000 customers were using Azure OpenAI Service, Microsoft said in a blog post.
But there are several lawsuits pending over generative AI tech from companies including OpenAI and Stability AI, brought by plaintiffs who allege that copyrighted data, mostly art, was used without permission to train the generative models. (Generative AI models “learn” to create art, code and more by “training” on sample images and text, usually scraped indiscriminately from the web.) Another case making its way through the courts seeks to establish whether code-generating models that don’t give attribution or credit can in fact be commercialized, and an Australian mayor has threatened a defamation suit against OpenAI for inaccuracies spouted by its generative model ChatGPT.
Philomin didn’t instill much confidence, frankly, refusing to say exactly which data Amazon’s Titan FM family was trained on. Instead, he stressed that the Titan models were built to detect and remove “harmful” content in the data AWS customers provide for customization, reject “inappropriate” content users input and filter outputs containing hate speech, profanity and violence.
Of course, even the best filtering systems can be circumvented, as ChatGPT has demonstrated. So-called prompt injection attacks against ChatGPT and similar models have been used to write malware, identify exploits in open source code and generate abhorrently sexist, racist and misinformation-laden content. (Generative AI models tend to amplify biases in training data, or, if they run out of relevant training data, simply make things up.)
But Philomin brushed aside those concerns.
“We’re committed to the responsible use of these technologies,” he said. “We’re monitoring the regulatory landscape out there… we have a lot of lawyers helping us look at which data we can use and which we can’t use.”
Philomin’s attempts at assurance aside, brands might not want to be on the hook for all that could go wrong. (In the event of a lawsuit, it’s not entirely clear whether AWS customers, AWS itself or the offending model’s creator would be held liable.) But individual customers might take the chance, particularly if there’s no charge for the privilege.
CodeWhisperer, Trainium and Inferentia2 launch in GA
Coinciding with its big generative AI push today, Amazon made CodeWhisperer, its AI-powered code-generating service, free for developers, with no usage restrictions.
The move suggests that CodeWhisperer hasn’t seen the uptake Amazon hoped it would. Its chief rival, GitHub’s Copilot, had over a million users as of January, thousands of whom are enterprise customers. CodeWhisperer clearly has ground to make up, which it aims to do on the corporate side with the simultaneous launch of CodeWhisperer Professional Tier. CodeWhisperer Professional Tier adds single sign-on with AWS Identity and Access Management integration as well as higher limits on scanning for security vulnerabilities.
CodeWhisperer launched in late June as part of the AWS IDE Toolkit and AWS Toolkit IDE extensions, a response of sorts to the aforementioned Copilot. Trained on billions of lines of publicly available open source code and Amazon’s own codebase, as well as documentation and code on public forums, CodeWhisperer can autocomplete entire functions in languages like Java, JavaScript and Python based on only a comment or a few keystrokes.
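The comment-to-code pattern works roughly like this: the developer writes only a descriptive comment (or a function signature), and the assistant proposes a full implementation. The completion below is an illustration of the kind of suggestion such a tool produces, not actual CodeWhisperer output.

```python
# Developer writes only the comment below; an assistant like CodeWhisperer
# would then suggest a complete function body. This particular body is a
# hand-written illustration, not a real CodeWhisperer suggestion.

# function to check whether a string is a palindrome, ignoring case
def is_palindrome(text: str) -> bool:
    normalized = text.lower()
    return normalized == normalized[::-1]

print(is_palindrome("Level"))   # True
print(is_palindrome("Python"))  # False
```

The developer then accepts, rejects or edits the suggestion inline in their IDE, rather than writing the routine from scratch.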

CodeWhisperer now supports several additional programming languages — specifically Go, Rust, PHP, Ruby, Kotlin, C, C++, Shell scripting, SQL and Scala — and, as before, highlights and optionally filters the license associated with functions it suggests that bear a resemblance to existing snippets found in its training data.
The highlighting is an attempt to ward off the legal challenges GitHub is facing with Copilot. Time will tell whether it’s successful.
“Developers can become a lot more productive with these tools,” Philomin said. “It’s difficult for developers to be up to date on everything… tools like this help them not have to worry about it.”
In less controversial territory, Amazon announced today that it’s launching Amazon Elastic Compute Cloud (EC2) Inf2 instances in general availability, powered by the company’s AWS Inferentia2 chips, which were previewed last year at Amazon’s re:Invent conference. Inf2 instances are designed to speed up AI runtimes, delivering ostensibly better throughput and lower latency for improved overall inference price performance.
In addition, Amazon EC2 Trn1n instances powered by AWS Trainium, Amazon’s custom-designed chip for AI training, are also generally available to customers as of today, Amazon announced. They offer up to 1,600 Gbps of network bandwidth and are designed to deliver up to 20% higher performance than Trn1 for large, network-intensive models, Amazon says.
Both Inf2 and Trn1n compete with rival offerings from Google and Microsoft, like Google’s TPU chips for AI training.
“AWS offers the most effective cloud infrastructure for generative AI,” Philomin said with confidence. “One of the needs for customers is the right costs for dealing with these models … It’s one of the reasons why many customers haven’t put these models in production.”
Them’s fighting words: the growth of generative AI reportedly brought Azure to its knees. Will Amazon suffer the same fate? That’s to be determined.