
Democratizing Generative AI with CPU-based Inference

Blog: Oracle BPM

The Generative AI market faces a significant hardware-availability challenge worldwide. Much of the expensive GPU capacity is consumed by Large Language Model (LLM) training, creating an availability crunch for users who want to deploy and evaluate foundation models in their own cloud tenancies or subscriptions for inference and fine-tuning. CPUs are a viable alternative for many of these workloads. Below is our experience working with CPUs, including performance test results.
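The post does not include its benchmark code, but a minimal, hypothetical harness for the kind of CPU throughput measurement it describes might look like the sketch below. The `generate_fn` interface and the `dummy_generate` stand-in are assumptions for illustration; in practice you would plug in a real CPU inference backend (for example a llama.cpp or Hugging Face pipeline call) in place of the stub.

```python
import time

def measure_throughput(generate_fn, prompt, n_runs=3):
    """Time a generation callable and report tokens per second.

    generate_fn(prompt) -> list of generated tokens (hypothetical
    interface; substitute a real CPU inference call here).
    Takes the best of n_runs to reduce warm-up and scheduling noise.
    """
    best_elapsed = float("inf")
    best_tokens = 0
    for _ in range(n_runs):
        start = time.perf_counter()
        out = generate_fn(prompt)
        elapsed = time.perf_counter() - start
        if elapsed < best_elapsed:
            best_elapsed = elapsed
            best_tokens = len(out)
    return best_tokens / best_elapsed

# Stand-in "model" so the harness runs anywhere: it just echoes the
# prompt's words back as tokens. Replace with real model inference.
def dummy_generate(prompt):
    return prompt.split()

if __name__ == "__main__":
    tps = measure_throughput(dummy_generate, "cpu inference on commodity hardware")
    print(f"throughput: {tps:.1f} tokens/sec")
```

Best-of-N timing is used rather than an average because a first CPU run typically pays one-time costs (cache warm-up, memory allocation) that would skew a mean.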


